Method and apparatus for processing a video signal (machine translation by Google Translate, not legally binding)
Patent abstract:
A method of processing a video in accordance with the present invention may comprise: generating a plurality of most probable mode (MPM) candidates; determining whether there is an MPM candidate identical to an intra-prediction mode of a current block among the plurality of MPM candidates; obtaining the intra-prediction mode of the current block based on a result of the determination; and performing intra-prediction for the current block based on the intra-prediction mode of the current block. (Machine translation by Google Translate, not legally binding)

Publication number: ES2800551A2
Application number: ES202031209
Filing date: 2017-06-22
Publication date: 2020-12-30
Inventor: Bae Keun Lee
Applicant: KT Corp
Primary IPC:
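As a rough illustration of the MPM-based signaling summarized in the abstract, the sketch below builds a most-probable-mode candidate list from two neighboring intra modes and codes the current mode against it. The three-entry list, the 35-mode numbering, and the planar/DC/vertical fallback modes are assumptions borrowed from HEVC-style codecs, not details fixed by this patent.

```python
# Illustrative sketch (not the claimed method): derive an MPM
# candidate list from the left and above neighbors' intra modes,
# then signal the current mode relative to that list.
PLANAR, DC = 0, 1

def build_mpm_list(left_mode, above_mode):
    """Build a 3-entry MPM candidate list, HEVC-style (35 modes)."""
    if left_mode == above_mode:
        if left_mode < 2:  # PLANAR or DC
            return [PLANAR, DC, 26]  # 26: vertical mode
        # angular mode: itself plus its two nearest angular neighbors
        return [left_mode,
                2 + ((left_mode - 2 - 1) % 32),
                2 + ((left_mode - 2 + 1) % 32)]
    mpm = [left_mode, above_mode]
    for filler in (PLANAR, DC, 26):
        if filler not in mpm:
            mpm.append(filler)
            break
    return mpm

def code_intra_mode(current_mode, mpm):
    """Return (mpm_flag, index or remaining mode) for signaling."""
    if current_mode in mpm:
        return True, mpm.index(current_mode)
    return False, current_mode  # remainder coding omitted for brevity
```

If the current mode hits the list, only a flag and a short index are signaled; otherwise the mode itself is entropy-coded, which is the cost asymmetry that makes MPM lists worthwhile.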
Patent description:
[0001] Method and apparatus for processing a video signal

[0003] Technical field

[0005] The present invention relates to a method and apparatus for processing video signals.

[0007] Background art

[0009] Recently, demand for high-resolution, high-quality images such as high-definition (HD) and ultra-high-definition (UHD) images has increased in various fields of application. However, higher-resolution, higher-quality image data involves larger amounts of data than conventional image data. Therefore, when image data is transmitted over media such as conventional wired and wireless broadband networks, or stored on conventional storage media, transmission and storage costs increase. To solve these problems, which arise as the resolution and quality of image data increase, high-efficiency image encoding/decoding techniques can be used.

[0011] Image compression technology includes several techniques: an inter-prediction technique of predicting a pixel value included in a current image from an image before or after the current image; an intra-prediction technique of predicting a pixel value included in a current image by using pixel information within the current image; an entropy coding technique of assigning a short code to a value with a high frequency of occurrence and a long code to a value with a low frequency of occurrence; and so on. Image data can be effectively compressed using such image compression technology, and can then be transmitted or stored.

[0013] Meanwhile, along with the demand for high-resolution images, demand for stereographic image content, a new imaging service, has also increased. Video compression techniques for effectively delivering ultra-high-resolution and high-resolution stereographic image content are under discussion.
[0015] Disclosure

[0017] Technical problem

[0019] An object of the present invention is to provide a method and apparatus for efficiently performing intra-prediction for an encoding/decoding target block in encoding/decoding a video signal.

[0021] An object of the present invention is to provide a method and apparatus for performing intra-prediction for an encoding/decoding target block based on a plurality of reference lines.

[0023] An object of the present invention is to provide a method and apparatus for replacing an unavailable reference sample with an available reference sample when generating a plurality of reference lines in encoding/decoding a video signal.

[0025] An object of the present invention is to provide a method and apparatus for calculating an average value of any one of a plurality of reference lines in encoding/decoding a video signal.

[0027] The technical objectives to be achieved by the present invention are not limited to the technical problems mentioned above. Other technical problems that are not mentioned will be readily understood by those skilled in the art from the following description.

[0029] Technical solution

[0031] A method and apparatus for decoding a video signal in accordance with the present invention can derive a plurality of reference sample lines for a current block, select a reference sample line to be used for intra-prediction of the current block from among the plurality of reference sample lines, and perform intra-prediction for the current block using the selected reference sample line. Here, if an unavailable reference sample is included in a first reference sample line among the plurality of reference sample lines, the unavailable reference sample is replaced with an available reference sample included in the first reference sample line or in a second reference sample line different from the first reference sample line.
[0033] A method and apparatus for encoding a video signal in accordance with the present invention can derive a plurality of reference sample lines for a current block, select a reference sample line to be used for intra-prediction of the current block from among the plurality of reference sample lines, and perform intra-prediction for the current block using the selected reference sample line. Here, if an unavailable reference sample is included in a first reference sample line among the plurality of reference sample lines, the unavailable reference sample is replaced with an available reference sample included in the first reference sample line or in a second reference sample line different from the first reference sample line.

[0035] In the method and apparatus for encoding/decoding a video signal according to the present invention, the unavailable reference sample is replaced by the available reference sample having the shortest distance to the unavailable reference sample among the available reference samples included in the first reference sample line or the second reference sample line.

[0037] In the method and apparatus for encoding/decoding a video signal according to the present invention, if the distance between the unavailable reference sample and an available reference sample included in the first reference line is equal to or greater than a threshold value, the unavailable reference sample is replaced with an available reference sample included in the second reference line.

[0039] In the method and apparatus for encoding/decoding a video signal according to the present invention, the second reference line has a higher index value than the first reference line.
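The substitution rule of paragraphs [0033] to [0037] can be sketched as follows: an unavailable sample is filled from the nearest available sample in the same reference line, unless that sample is at least a threshold distance away, in which case the co-located sample of a second reference line is used instead. The list representation, the threshold value, and the use of the co-located position in the second line are illustrative assumptions.

```python
# Hedged sketch of unavailable-reference-sample substitution across
# two reference lines. None marks an unavailable sample.
def fill_unavailable(line0, line1, threshold=4):
    """line0/line1: lists of sample values; None = unavailable."""
    filled = list(line0)
    for i, v in enumerate(line0):
        if v is not None:
            continue
        # nearest available sample in the same (first) reference line
        candidates = [(abs(i - j), j) for j, s in enumerate(line0)
                      if s is not None]
        if candidates:
            dist, j = min(candidates)
            if dist < threshold:
                filled[i] = line0[j]
                continue
        # distance >= threshold: fall back to the second reference line
        filled[i] = line1[i]
    return filled
```

With a threshold of 3, a gap of up to two samples is padded from within the same line, while samples farther from any available neighbor are taken from the second line.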
[0041] In the method and apparatus for encoding/decoding a video signal according to the present invention, a prediction sample of the current block is generated based on an average value of a part of the reference samples included in the selected reference sample line.

[0043] In the method and apparatus for encoding/decoding a video signal according to the present invention, the number of reference samples used to calculate the average value, among the reference samples included in the selected reference sample line, is determined based on the size of the current block.

[0045] The features briefly summarized above are only illustrative aspects of the detailed description of the invention that follows, and do not limit the scope of the invention.

[0047] Advantageous effects

[0049] In accordance with the present invention, efficient intra-prediction can be performed for an encoding/decoding target block.

[0051] In accordance with the present invention, intra-prediction for an encoding/decoding target block can be performed based on a plurality of reference lines.

[0053] In accordance with the present invention, an unavailable reference sample can be replaced with an available reference sample when generating a plurality of reference lines.

[0055] In accordance with the present invention, intra-prediction for an encoding/decoding target block can be performed by calculating an average value from one of the plurality of reference lines.

[0057] The effects obtainable by the present invention are not limited to the effects mentioned above, and other effects not mentioned can be clearly understood by those skilled in the art from the following description.
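The averaging described in paragraphs [0041] and [0043] can be sketched as a DC-like prediction value computed from only part of the selected reference line, with the number of contributing samples depending on the block size. The specific mapping from size to sample count (width plus height) and the rounding scheme are assumptions for illustration, not values taken from the patent.

```python
# Sketch: average a size-dependent portion of one reference line
# to obtain a single DC-like prediction value for the block.
def dc_from_reference_line(ref_line, block_width, block_height):
    # assumption: use (width + height) samples from the line
    count = min(len(ref_line), block_width + block_height)
    samples = ref_line[:count]
    # integer average with rounding, as is typical in video codecs
    return (sum(samples) + count // 2) // count
```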
[0059] Description of drawings

[0060] Figure 1 is a block diagram illustrating a device for encoding a video in accordance with an embodiment of the present invention.

[0062] Figure 2 is a block diagram of a device for decoding a video in accordance with an embodiment of the present invention.

[0064] Figure 3 is a drawing illustrating an example of hierarchical partitioning of a coding block based on a tree structure in accordance with an embodiment of the present invention.

[0066] Figure 4 is a drawing illustrating a partition mode that can be applied to a coding block when the coding block is encoded by inter-prediction.

[0068] Figure 5 is a drawing illustrating types of predefined intra-prediction modes for a device for encoding/decoding a video in accordance with an embodiment of the present invention.

[0069] Figure 6 is a diagram illustrating a type of extended intra-prediction modes in accordance with an embodiment of the present invention.

[0071] Figure 7 is a flow chart briefly illustrating an intra-prediction method in accordance with an embodiment of the present invention.

[0073] Figure 8 is a drawing illustrating a method for correcting a prediction sample of a current block based on differential information of neighboring samples in accordance with an embodiment of the present invention.

[0075] Figures 9 and 10 are drawings illustrating a method of correcting a prediction sample based on a predetermined correction filter in accordance with an embodiment of the present invention.

[0077] Figure 11 shows a range of reference samples for intra-prediction in accordance with an embodiment to which the present invention is applied.

[0079] Figures 12 to 14 illustrate an example of filtering on reference samples in accordance with an embodiment of the present invention.

[0080] Figure 15 is a diagram illustrating a plurality of reference sample lines in accordance with an embodiment of the present invention.

[0082] Figure 16 is a flow chart illustrating a method for performing intra-prediction using an extended reference line in accordance with the present invention.

[0084] Figure 17 is a drawing illustrating a plurality of reference lines for a non-square block in accordance with the present invention.

[0086] Figure 18 is a drawing for explaining an example where an unavailable reference sample is replaced with an available reference sample located at the shortest distance from the unavailable reference sample.

[0088] Figures 19 and 20 are drawings for explaining an embodiment in which the position of an available reference sample is adaptively determined according to the distance between an unavailable reference sample and an available reference sample included in the same reference line as the unavailable reference sample.

[0090] Figures 21 and 22 are drawings illustrating reference samples used to obtain an average value of a reference line in accordance with an embodiment to which the present invention is applied.

[0092] Invention mode

[0094] A variety of modifications can be made to the present invention, and there are various embodiments of the present invention, examples of which will now be provided with reference to the drawings and described in detail. However, the present invention is not limited thereto, and the exemplary embodiments are to be construed as including all modifications, equivalents, or substitutes within the technical concept and technical scope of the present invention. Like reference numerals refer to like elements in the drawings.

[0096] The terms 'first', 'second', etc. used in the specification can be used to describe various components, but the components are not to be construed as limited by these terms. The terms are only used to distinguish one component from another.
For example, the 'first' component may be referred to as the 'second' component without departing from the scope of the present invention, and the 'second' component may similarly be referred to as the 'first' component. The term 'and/or' includes a combination of a plurality of items or any one of a plurality of terms.

[0098] It will be understood that when an element is referred to as being 'connected to' or 'coupled to' another element, it may be directly connected or coupled to the other element, or intervening elements may be present between them. In contrast, when an element is referred to as being 'directly connected to' or 'directly coupled to' another element, there are no intervening elements present.

[0100] The terms used in the present specification are merely used to describe particular embodiments, and are not intended to limit the present invention. An expression in the singular encompasses the plural, unless it has a clearly different meaning in context. In the present specification, terms such as 'including', 'having', etc. are intended to indicate the existence of the features, numbers, steps, actions, elements, parts, or combinations thereof described in the specification, and are not intended to exclude the possibility that one or more other features, numbers, steps, actions, elements, parts, or combinations thereof may exist or be added.

[0102] Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. The same constituent elements in the drawings are denoted by the same reference numerals, and repeated description of the same elements will be omitted.
[0104] Figure 1 is a block diagram illustrating a device for encoding a video in accordance with an embodiment of the present invention.

[0105] With reference to Figure 1, the device 100 for encoding a video may include: an image partition module 110, prediction modules 120 and 125, a transform module 130, a quantization module 135, a reordering module 160, an entropy encoding module 165, an inverse quantization module 140, an inverse transform module 145, a filter module 150, and a memory 155.

[0107] The constitutional parts shown in Figure 1 are shown independently so as to represent characteristic functions different from each other in the device for encoding a video. This does not mean that each constitutional part is constituted as a separate hardware or software unit. In other words, each constitutional part is listed as a separate constitutional part for convenience. At least two constitutional parts may be combined into a single constitutional part, or one constitutional part may be divided into a plurality of constitutional parts to perform each function. Embodiments in which constitutional parts are combined and embodiments in which a constitutional part is divided are also within the scope of the present invention, provided they do not depart from the essence of the present invention.

[0109] Furthermore, some of the constituents may not be indispensable constituents performing essential functions of the present invention, but may be optional constituents that merely enhance its performance. The present invention can be implemented by including only the constitutional parts indispensable for implementing the essence of the present invention, excluding the constituents used merely to improve performance. A structure including only the indispensable constituents, excluding the optional constituents used only to improve performance, is also within the scope of the present invention.
[0111] The image partition module 110 can divide an input image into one or more processing units. Here, a processing unit can be a prediction unit (PU), a transform unit (TU), or a coding unit (CU). The image partition module 110 can divide an image into combinations of multiple coding units, prediction units, and transform units, and can encode the image by selecting one combination of coding units, prediction units, and transform units according to a predetermined criterion (for example, a cost function).

[0113] For example, an image can be divided into multiple coding units. A recursive tree structure, such as a quad-tree structure, can be used to divide an image into coding units. A coding unit that is divided into further coding units, with an image or the largest coding unit as the root, can be partitioned so that its child nodes correspond to the partitioned coding units. A coding unit that is no longer partitioned, due to a predetermined constraint, serves as a leaf node. That is, when it is assumed that only square partitioning is possible for a coding unit, one coding unit can be divided into at most four further coding units.

[0115] Hereinafter, in embodiments of the present invention, a coding unit may mean a unit that performs encoding or a unit that performs decoding.

[0117] A prediction unit can be one of partitions of the same size and square or rectangular shape divided within a single coding unit, or it can be one of partitions having different shapes/sizes within a single coding unit.

[0119] When a prediction unit subject to intra-prediction is generated based on a coding unit and the coding unit is not the smallest coding unit, intra-prediction can be performed without dividing the coding unit into multiple NxN prediction units.
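The recursive quad-tree partitioning of paragraph [0113] can be sketched as follows: a coding unit either remains a leaf or splits into four square children, down to a minimum size. The split decision is delegated to a caller-supplied predicate, standing in for the encoder's cost-function decision; the function names and tuple layout are illustrative.

```python
# Minimal sketch of recursive quad-tree coding-unit partitioning.
def partition(x, y, size, min_size, should_split):
    """Return the list of leaf coding units as (x, y, size) tuples."""
    if size <= min_size or not should_split(x, y, size):
        return [(x, y, size)]  # leaf node: no further split
    half = size // 2
    leaves = []
    for dy in (0, half):       # four square children
        for dx in (0, half):
            leaves += partition(x + dx, y + dy, half, min_size,
                                should_split)
    return leaves
```

For example, splitting only the 64x64 root yields four 32x32 leaves; splitting everything down to a 16-sample minimum yields a uniform grid of 16x16 leaves.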
[0121] The prediction modules 120 and 125 may include an inter-prediction module 120 that performs inter-prediction and an intra-prediction module 125 that performs intra-prediction. Whether to perform inter-prediction or intra-prediction for a prediction unit can be determined, and detailed information (for example, an intra-prediction mode, a motion vector, a reference image, etc.) can be determined according to each prediction method. Here, the processing unit on which prediction is performed may be different from the processing unit for which the prediction method and its details are determined. For example, the prediction method, the prediction mode, etc. may be determined per prediction unit, while prediction may be performed per transform unit. A residual value (residual block) between the generated prediction block and an original block can be input to the transform module 130. In addition, the prediction mode information, the motion vector information, etc. used for prediction can be encoded together with the residual value by the entropy encoding module 165 and transmitted to a device for decoding a video. When a particular encoding mode is used, it is also possible to transmit to the decoding device by encoding the original block as-is, without generating a prediction block through the prediction modules 120 and 125.

[0123] The inter-prediction module 120 can predict a prediction unit based on information from at least one of a previous image or a subsequent image of the current image, or in some cases based on information from some already-encoded regions in the current image. The inter-prediction module 120 may include a reference image interpolation module, a motion prediction module, and a motion compensation module.
[0125] The reference image interpolation module can receive reference image information from the memory 155 and can generate pixel information at integer-pixel or sub-pixel positions from the reference image. In the case of luminance pixels, a DCT-based 8-tap interpolation filter with varying filter coefficients can be used to generate pixel information at integer-pixel or sub-pixel positions in units of 1/4 pixel. In the case of chrominance signals, a DCT-based 4-tap interpolation filter with varying filter coefficients can be used to generate pixel information at integer-pixel or sub-pixel positions in units of 1/8 pixel.

[0127] The motion prediction module can perform motion prediction based on the reference image interpolated by the reference image interpolation module. Various methods can be used to calculate a motion vector, such as a full-search-based block matching algorithm (FBMA), a three-step search (TSS), and a new three-step search algorithm (NTS). The motion vector can have a motion vector value in units of 1/2 pixel or 1/4 pixel based on interpolated pixels. The motion prediction module can predict a current prediction unit by varying the motion prediction method. Various motion prediction methods can be used, such as a skip mode, a merge mode, an AMVP (advanced motion vector prediction) mode, and an intra block copy mode.

[0129] The intra-prediction module 125 may generate a prediction unit based on reference pixel information adjacent to a current block, which is pixel information in the current image. When a neighboring block of the current prediction unit is an inter-predicted block, and therefore a reference pixel is an inter-predicted pixel, the reference pixel included in the inter-predicted block may be replaced with reference pixel information of a neighboring block subject to intra-prediction.
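The 8-tap luminance interpolation of paragraph [0125] can be sketched as below. The coefficients shown are the well-known HEVC half-pel luma filter, used here purely as an example; the patent does not fix the coefficients, so treat this as an illustration of the filtering structure, not the claimed filter.

```python
# Sketch of DCT-based 8-tap sub-pixel interpolation (half-pel case).
HALF_PEL_TAPS = [-1, 4, -11, 40, 40, -11, 4, -1]  # coefficients sum to 64

def interpolate_half_pel(row, i):
    """Half-pel sample between row[i] and row[i + 1].

    Needs 3 samples of margin on each side of position i.
    """
    acc = sum(c * row[i - 3 + k] for k, c in enumerate(HALF_PEL_TAPS))
    return (acc + 32) >> 6  # normalize by 64 with rounding
```

On a flat signal the filter reproduces the input exactly, and on a linear ramp it lands on the midpoint, which is the behavior expected of a well-normalized interpolation filter.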
That is, when a reference pixel is not available, at least one of the available reference pixels may be used in place of the unavailable reference pixel information.

[0131] Prediction modes in intra-prediction may include a directional prediction mode, which uses reference pixel information depending on the prediction direction, and a non-directional prediction mode, which does not use directional information when performing prediction. A mode for predicting luminance information may be different from a mode for predicting chrominance information, and to predict the chrominance information, the intra-prediction mode information used to predict the luminance information or the predicted luminance signal information can be used.

[0133] In performing intra-prediction, when the size of the prediction unit is the same as the size of the transform unit, intra-prediction can be performed on the prediction unit based on the pixels to the left of, above-left of, and above the prediction unit. However, when the size of the prediction unit is different from the size of the transform unit, intra-prediction can be performed using reference pixels based on the transform unit. Also, intra-prediction using NxN partitioning can be used only for the smallest coding unit.

[0135] In the intra-prediction method, a prediction block can be generated after applying an AIS (adaptive intra smoothing) filter to a reference pixel depending on the prediction mode. The type of AIS filter applied to the reference pixel can vary. To perform the intra-prediction method, an intra-prediction mode of the current prediction unit can be predicted from the intra-prediction mode of a prediction unit neighboring the current prediction unit.
In predicting the prediction mode of the current prediction unit using the mode information predicted from the neighboring prediction unit, when the intra-prediction mode of the current prediction unit is the same as the intra-prediction mode of the neighboring prediction unit, information indicating that the prediction modes of the current prediction unit and the neighboring prediction unit are the same can be transmitted using predetermined flag information. When the prediction mode of the current prediction unit is different from the prediction mode of the neighboring prediction unit, entropy encoding can be performed to encode the prediction mode information of the current block.

[0137] Furthermore, a residual block, including information about the residual value that is the difference between the prediction unit subjected to prediction and the original block of the prediction unit, may be generated based on the prediction units generated by the prediction modules 120 and 125. The generated residual block can be input to the transform module 130.

[0139] The transform module 130 can transform the residual block, including the information about the residual value between the original block and the prediction unit generated by the prediction modules 120 and 125, by using a transform method such as the discrete cosine transform (DCT), the discrete sine transform (DST), or the KLT. Whether to apply the DCT, DST, or KLT to transform the residual block can be determined based on the intra-prediction mode information of the prediction unit used to generate the residual block.

[0141] The quantization module 135 can quantize the values transformed into the frequency domain by the transform module 130. The quantization coefficients can vary depending on the block or the importance of an image. The values computed by the quantization module 135 may be provided to the inverse quantization module 140 and the reordering module 160.
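The quantization and inverse quantization round trip of paragraph [0141] (and of modules 140 and 220 below) can be sketched with a single scalar step size. Real codecs derive the step from a quantization parameter and apply per-frequency scaling; the uniform step used here is a deliberate simplification for illustration.

```python
# Simplified sketch of scalar quantization and its inverse.
def quantize(coeffs, step):
    """Map transform coefficients to integer levels."""
    return [int(round(c / step)) for c in coeffs]

def dequantize(levels, step):
    """Reconstruct approximate coefficients from levels."""
    return [lvl * step for lvl in levels]
```

The reconstruction error is bounded by half the step size per coefficient, which is the basic rate-distortion trade-off the quantization parameter controls.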
[0143] The reordering module 160 can rearrange the coefficients of the quantized residual values.

[0145] The reordering module 160 can change coefficients in the form of a two-dimensional block into coefficients in the form of a one-dimensional vector through a coefficient scanning method. For example, the reordering module 160 can scan from the DC coefficient to a coefficient in the high-frequency domain using a zigzag scanning method, so as to change the coefficients into the form of a one-dimensional vector. Depending on the size of the transform unit and the intra-prediction mode, vertical scanning, where coefficients in the form of a two-dimensional block are scanned in the column direction, or horizontal scanning, where coefficients in the form of a two-dimensional block are scanned in the row direction, can be used instead of zigzag scanning. That is, which scanning method among zigzag scanning, vertical scanning, and horizontal scanning is used can be determined according to the size of the transform unit and the intra-prediction mode.

[0147] The entropy encoding module 165 can perform entropy encoding based on the values calculated by the reordering module 160. Entropy encoding can use various encoding methods, for example, exponential Golomb coding, context-adaptive variable-length coding (CAVLC), and context-adaptive binary arithmetic coding (CABAC).

[0149] The entropy encoding module 165 can encode a variety of information, such as residual value coefficient information and block type information of the coding unit, prediction mode information, partition unit information, prediction unit information, transform unit information, motion vector information, reference frame information, block interpolation information, and filtering information, from the reordering module 160 and the prediction modules 120 and 125.
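The coefficient scanning step of paragraph [0145] can be sketched as a diagonal zigzag over an N x N block, walking from the DC coefficient toward the high-frequency corner. The anti-diagonal ordering below reproduces the classic zigzag pattern; vertical or horizontal scans would simply use column-major or row-major order instead.

```python
# Sketch of zigzag coefficient scanning: 2-D block -> 1-D vector.
def zigzag_scan(block):
    n = len(block)
    # sort positions by anti-diagonal (r + c); within a diagonal,
    # alternate the traversal direction to form the zigzag
    order = sorted(((r, c) for r in range(n) for c in range(n)),
                   key=lambda rc: (rc[0] + rc[1],
                                   rc[0] if (rc[0] + rc[1]) % 2
                                   else -rc[0]))
    return [block[r][c] for r, c in order]
```

Because quantized high-frequency coefficients are usually zero, this ordering clusters the nonzero values at the front of the vector, which is what makes the subsequent entropy coding efficient.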
[0150] The entropy encoding module 165 may entropy-encode the coefficients of the coding unit input from the reordering module 160.

[0152] The inverse quantization module 140 can inverse-quantize the values quantized by the quantization module 135, and the inverse transform module 145 can inverse-transform the values transformed by the transform module 130. The residual value generated by the inverse quantization module 140 and the inverse transform module 145 can be combined with the prediction unit predicted by a motion estimation module, a motion compensation module, and the intra-prediction module of the prediction modules 120 and 125, so that a reconstructed block can be generated.

[0154] The filter module 150 may include at least one of a deblocking filter, an offset correction unit, and an adaptive loop filter (ALF).

[0156] The deblocking filter can remove block distortion that occurs due to boundaries between blocks in the reconstructed image. To determine whether to perform deblocking, the pixels included in several rows or columns of the block can serve as a basis for determining whether to apply the deblocking filter to the current block. When the deblocking filter is applied to the block, a strong filter or a weak filter can be applied depending on the required deblocking filtering strength. Furthermore, in applying the deblocking filter, horizontal-direction filtering and vertical-direction filtering can be processed in parallel.

[0158] The offset correction module can correct an offset from the original image in units of pixels in the image subjected to deblocking. To perform offset correction on a particular image, it is possible to use a method of applying an offset in consideration of the edge information of each pixel, or a method of partitioning the pixels of an image into a predetermined number of regions, determining a region to which an offset is to be applied, and applying the offset to the determined region.
[0160] Adaptive loop filtering (ALF) can be performed based on a value obtained by comparing the filtered reconstructed image and the original image. The pixels included in the image can be divided into predetermined groups, a filter to be applied to each group can be determined, and filtering can be performed individually for each group. Information on whether to apply ALF, and the luma signal, can be transmitted per coding unit (CU). The shape and filter coefficients of the filter for ALF can vary depending on each block. Alternatively, the same form (fixed form) of the ALF filter can be applied regardless of the characteristics of the target block.

[0162] The memory 155 may store the reconstructed block or image computed through the filter module 150. The stored reconstructed block or image may be provided to the prediction modules 120 and 125 when performing inter-prediction.

[0164] Figure 2 is a block diagram of a device for decoding a video in accordance with an embodiment of the present invention.

[0166] With reference to Figure 2, the device 200 for decoding a video may include: an entropy decoding module 210, a reordering module 215, an inverse quantization module 220, an inverse transform module 225, prediction modules 230 and 235, a filter module 240, and a memory 245.

[0168] When a video bitstream is input from a device for encoding a video, the input bitstream can be decoded according to a process inverse to that of the device for encoding a video.

[0170] The entropy decoding module 210 can perform entropy decoding according to a process inverse to the entropy encoding performed by the entropy encoding module of the device for encoding a video. For example, corresponding to the methods performed by the device for encoding a video, various methods can be applied, such as exponential Golomb coding, context-adaptive variable-length coding (CAVLC), and context-adaptive binary arithmetic coding (CABAC).
[0172] The entropy decoding module 210 can decode information related to the intra-prediction and inter-prediction performed by the device for encoding a video.

[0174] The reordering module 215 can perform rearrangement on the bitstream entropy-decoded by the entropy decoding module 210, based on the rearrangement method used in the device for encoding a video. The reordering module can reconstruct and rearrange coefficients in the form of one-dimensional vectors into coefficients in the form of two-dimensional blocks. The reordering module 215 can receive information related to the coefficient scanning performed in the device for encoding a video, and can perform rearrangement via a method of inversely scanning the coefficients based on the scanning order performed in the device for encoding a video.

[0176] The inverse quantization module 220 can perform inverse quantization based on a quantization parameter received from the device for encoding a video and the rearranged coefficients of the block.

[0178] The inverse transform module 225 can perform the inverse transform, that is, the inverse DCT, inverse DST, or inverse KLT, which is the inverse of the transform, that is, the DCT, DST, or KLT, performed by the transform module on the quantization result of the device for encoding a video. The inverse transform can be performed based on a transform unit determined by the device for encoding a video. The inverse transform module 225 of the device for decoding a video can selectively perform transform schemes (for example, DCT, DST, and KLT) depending on multiple pieces of information, such as the prediction method, the size of the current block, and the prediction direction.

[0180] The prediction modules 230 and 235 may generate a prediction block based on prediction block generation information received from the entropy decoding module 210 and previously decoded block or image information received from the memory 245.
[0182] As described above, as in the operation of the device for encoding a video, when performing intraprediction, when the size of the prediction unit is the same as the size of the transform unit, the intraprediction can be performed on the prediction unit based on the pixels positioned to the left, top left, and above the prediction unit. When performing intraprediction, when the size of the prediction unit is different from the size of the transform unit, the intraprediction can be performed using a reference pixel based on the transform unit. [0183] Also, intraprediction using the NxN partition can be used only for the smallest coding unit. [0185] The prediction modules 230 and 235 may include a prediction unit determination module, an interprediction module, and an intraprediction module. The prediction unit determination module can receive a variety of information, such as prediction unit information, prediction mode information of an intraprediction method, motion prediction information of an interprediction method, etc., from the entropy decoding module 210, can divide a current coding unit into prediction units, and can determine whether interprediction or intraprediction is performed on the prediction unit. By using the information required for the interprediction of the current prediction unit received from the device for encoding a video, the interprediction module 230 can perform interprediction on the current prediction unit based on information from at least one of a previous image or a subsequent image of the current image including the current prediction unit. Alternatively, interprediction can be performed based on information from some previously reconstructed regions in the current image including the current prediction unit.
[0187] To perform interprediction, it can be determined for the coding unit which of a skip mode, a merge mode, an AMVP mode, and an intra block copy mode is used as the motion prediction method of the prediction unit included in the coding unit. [0189] The intraprediction module 235 can generate a prediction block based on the pixel information in the current image. When the prediction unit is a prediction unit subject to intraprediction, the intraprediction can be performed based on the intraprediction mode information of the prediction unit received from the device for encoding a video. The intraprediction module 235 may include an adaptive intra smoothing (AIS) filter, a reference pixel interpolation module, and a DC filter. The AIS filter performs filtering on the reference pixel of the current block, and whether to apply the filter can be determined according to the prediction mode of the current prediction unit. AIS filtering can be performed on the reference pixel of the current block using the prediction mode of the prediction unit and the AIS filter information received from the device for encoding a video. When the prediction mode of the current block is a mode in which AIS filtering is not performed, the AIS filter may not be applied. [0191] When the prediction mode of the prediction unit is a prediction mode in which the intraprediction is performed based on a pixel value obtained by interpolating the reference pixel, the reference pixel interpolation module can interpolate the reference pixel to generate a reference pixel at an integer pixel position or at less than an integer pixel position. When the prediction mode of the current prediction unit is a prediction mode in which a prediction block is generated without interpolating the reference pixel, the reference pixel may not be interpolated. The DC filter can generate a prediction block through filtering when the prediction mode of the current block is a DC mode.
[0193] The reconstructed image or block may be provided to the filter module 240. The filter module 240 may include the deblocking filter, the offset correction module, and the ALF. [0195] Information on whether or not the deblocking filter is applied to the corresponding block or image, and information on which of the strong and weak filters is applied when the deblocking filter is applied, can be received from the device for encoding a video. The deblocking filter of the device for decoding a video can receive the deblocking filter information from the device for encoding a video, and can perform deblocking filtering on the corresponding block. [0197] The offset correction module can perform offset correction on the reconstructed image based on the type of offset correction and the offset value information applied to the image when encoding. [0199] The ALF can be applied to the coding unit based on information on whether the ALF should be applied, information on the ALF coefficients, etc., received from the device for encoding a video. The ALF information can be provided as included in a particular parameter set. [0201] The memory 245 can store the reconstructed image or block for use as a reference image or block, and can provide the reconstructed image to an output module. [0202] As described above, in the embodiment of the present invention, for convenience of explanation, the coding unit is used as a term representing a unit for encoding, but the coding unit may serve as a unit that performs decoding as well as encoding. [0204] Furthermore, a current block can represent a target block to be encoded/decoded. And, the current block can represent a coding tree block (or a coding tree unit), a coding block (or a coding unit), a transform block (or a transform unit), a prediction block (or a prediction unit), or the like, depending on the encoding/decoding step.
[0206] An image can be encoded/decoded by dividing it into base blocks that have a square shape or a non-square shape. At this time, the base block can be referred to as a coding tree unit. Information on whether the coding tree unit has a square shape or a non-square shape, or information on the size of the coding tree unit, can be signaled through a sequence parameter set, a picture parameter set, or a slice header. The coding tree unit can be divided into a quad tree or a binary tree structure so that a coding unit can be generated. [0208] Figure 3 is a drawing illustrating an example of hierarchical partitioning of a coding block based on a tree structure in accordance with an embodiment of the present invention. [0210] An input video signal is decoded in predetermined block units, and the predetermined unit for decoding the input video signal is a coding block. The coding block can be a unit that performs intra/interprediction, transform, and quantization. Furthermore, a prediction mode (for example, an intraprediction mode or an interprediction mode) is determined in units of a coding block, and the prediction blocks included in the coding block may share the determined prediction mode. The coding block can be a square or non-square block having an arbitrary size in the range 8x8 to 64x64, or it can be a square or non-square block having a size of 128x128, 256x256, or more. [0212] Specifically, the coding block can be hierarchically divided based on at least one of a quad tree and a binary tree. Here, quad-tree-based partitioning can mean that a 2Nx2N coding block is divided into four NxN coding blocks, and binary-tree-based partitioning can mean that one coding block is divided into two coding blocks. Binary-tree-based partitioning can be performed symmetrically or asymmetrically. The coding block divided based on the binary tree can be a square block or a non-square block, such as a rectangular shape.
Binary-tree-based partitioning can be performed on a coding block for which quad-tree-based partitioning is no longer performed, and quad-tree-based partitioning may no longer be performed on a coding block partitioned based on the binary tree. [0214] To implement adaptive quad-tree-based or binary-tree-based partitioning, the following can be used: information indicating quad-tree-based partitioning, information about the size/depth of coding blocks for which quad-tree-based partitioning is allowed, information indicating binary-tree-based partitioning, information about the size/depth of coding blocks for which binary-tree-based partitioning is allowed, information about the size/depth of coding blocks for which binary-tree-based partitioning is not allowed, information about whether binary-tree-based partitioning is performed in a vertical direction or a horizontal direction, etc. [0216] As shown in Figure 3, the first coding block 300 with a partition depth (split depth) of k can be divided into multiple second coding blocks based on the quad tree. For example, the second coding blocks 310 to 340 can be square blocks that are half the width and half the height of the first coding block, and the partition depth of the second coding blocks can be increased to k+1. [0218] The second coding block 310 with a partition depth of k+1 can be divided into multiple third coding blocks with a partition depth of k+2. The partitioning of the second coding block 310 can be performed selectively using one of the quad tree and the binary tree depending on a partitioning method. Here, the partitioning method can be determined based on at least one of the information indicating quad-tree-based partitioning and the information indicating binary-tree-based partitioning.
[0220] When the second coding block 310 is divided according to the quad tree, the second coding block 310 can be divided into four third coding blocks 310a that are half the width and half the height of the second coding block, and the partition depth of the third coding blocks 310a can be increased to k+2. On the contrary, when the second coding block 310 is divided according to the binary tree, the second coding block 310 can be divided into two third coding blocks. Here, each of the two third coding blocks can be a non-square block having half the width or half the height of the second coding block, and the partition depth can be increased to k+2. The second coding block can be determined as a non-square block of a horizontal or vertical direction depending on the partitioning direction, and the partitioning direction can be determined based on the information about whether binary-tree-based partitioning is performed in a vertical direction or a horizontal direction. [0222] Meanwhile, the second coding block 310 can be determined as a leaf coding block that is no longer partitioned based on the quad tree or the binary tree. In this case, the leaf coding block can be used as a prediction block or a transform block. [0224] Like the partitioning of the second coding block 310, the third coding block 310a can be determined as a leaf coding block, or it can be further divided based on the quad tree or the binary tree. [0226] Meanwhile, the third coding block 310b partitioned based on the binary tree can be further divided into coding blocks 310b-2 of a vertical direction or coding blocks 310b-3 of a horizontal direction based on the binary tree, and the partition depth of the relevant coding blocks can be increased to k+3. Alternatively, the third coding block 310b can be determined as a leaf coding block 310b-1 that is no longer partitioned based on the binary tree.
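The hierarchical partitioning just described can be sketched as a small recursion: a block is split by quad tree into four half-size blocks or by binary tree into two blocks, with the depth increased by one at each level. The split decisions here come from a caller-supplied function; in a real codec they come from signaled syntax and the size/depth constraints described above.

```python
def partition(x, y, w, h, depth, decide):
    """Recursively partition block (x, y, w, h).

    `decide(x, y, w, h, depth)` returns 'quad', 'vert', 'horz', or 'leaf'.
    Returns the list of leaf coding blocks as (x, y, w, h, depth) tuples.
    """
    mode = decide(x, y, w, h, depth)
    if mode == 'quad':                       # four blocks, half width and half height
        hw, hh = w // 2, h // 2
        leaves = []
        for (nx, ny) in ((x, y), (x + hw, y), (x, y + hh), (x + hw, y + hh)):
            leaves += partition(nx, ny, hw, hh, depth + 1, decide)
        return leaves
    if mode == 'vert':                       # binary split in the vertical direction
        return (partition(x, y, w // 2, h, depth + 1, decide)
                + partition(x + w // 2, y, w // 2, h, depth + 1, decide))
    if mode == 'horz':                       # binary split in the horizontal direction
        return (partition(x, y, w, h // 2, depth + 1, decide)
                + partition(x, y + h // 2, w, h // 2, depth + 1, decide))
    return [(x, y, w, h, depth)]             # leaf coding block
```

For example, a 64x64 block quad-split once yields four 32x32 leaves at depth k+1, matching the 300 → 310-340 step in Figure 3.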
In this case, the coding block 310b-1 can be used as a prediction block or a transform block. However, the above partitioning process can be performed in a limited way based on at least one of the information about the size/depth of coding blocks for which quad-tree-based partitioning is allowed, the information about the size/depth of coding blocks for which binary-tree-based partitioning is allowed, and the information about the size/depth of coding blocks for which binary-tree-based partitioning is not allowed. [0228] The number of candidates representing the size of a coding block may be limited to a predetermined number, or the size of a coding block in a predetermined unit may have a fixed value. As an example, the size of the coding block in a sequence or in an image can be limited to 256x256, 128x128, or 32x32. Information indicating the size of the coding block in the sequence or in the image can be signaled through a sequence header or an image header. [0230] A coding block is encoded using at least one of a skip mode, intraprediction, or interprediction. Once a coding block is determined, a prediction block can be determined through predictive partitioning of the coding block. Predictive partitioning of the coding block can be performed by a partition mode (Part_mode) that indicates the partition type of the coding block. The size or shape of the prediction block can be determined according to the partition mode of the coding block. For example, the size of a prediction block determined according to the partition mode can be equal to or less than the size of a coding block. [0232] Figure 4 is a diagram illustrating partition modes that can be applied to a coding block when the coding block is encoded by interprediction. [0234] When a coding block is encoded by interprediction, one of 8 partition modes can be applied to the coding block, as in the example shown in Figure 4.
[0235] When a coding block is encoded by intraprediction, the PART_2Nx2N partition mode or the PART_NxN partition mode can be applied to the coding block. [0237] PART_NxN can be applied when a coding block has a minimum size. Here, the minimum size of the coding block can be predefined in an encoder and a decoder. Or, the information on the minimum size of the coding block can be signaled through a bit stream. For example, the minimum size of the coding block can be signaled through a slice header, so that the minimum size of the coding block can be defined per slice. [0239] In general, a prediction block can have a size from 64x64 to 4x4. However, when a coding block is encoded by interprediction, the prediction block can be restricted from having a 4x4 size in order to reduce the memory bandwidth when performing motion compensation. [0241] Figure 5 is a drawing illustrating types of predefined intraprediction modes for a device for encoding/decoding a video in accordance with an embodiment of the present invention. [0243] The device for encoding/decoding a video can perform intraprediction using one of the predefined intraprediction modes. The predefined intraprediction modes for intraprediction may include non-directional prediction modes (for example, a planar mode, a DC mode) and 33 directional prediction modes. [0245] Alternatively, to improve intraprediction accuracy, a greater number of directional prediction modes than the 33 directional prediction modes can be used. That is, M extended directional prediction modes can be defined by subdividing the angles of the directional prediction modes (M > 33), and a directional prediction mode having a predetermined angle can be derived using at least one of the 33 predefined directional prediction modes. [0247] A greater number of intraprediction modes than the 35 intraprediction modes shown in Figure 5 can be used.
For example, a greater number of intraprediction modes than the 35 intraprediction modes can be used by subdividing the angles of the directional prediction modes, or by deriving a directional prediction mode that has a predetermined angle using at least one of a predefined number of directional prediction modes. At this time, the use of a greater number of intraprediction modes than the 35 intraprediction modes can be referred to as an extended intraprediction mode. [0249] Figure 6 shows an example of the extended intraprediction modes, and the extended intraprediction modes may include two non-directional prediction modes and 65 extended directional prediction modes. The same number of extended intraprediction modes can be used for a luminance component and a chrominance component, or a different number of intraprediction modes can be used for each component. For example, 67 extended intraprediction modes can be used for the luminance component, and 35 intraprediction modes can be used for the chrominance component. [0251] Alternatively, depending on the chrominance format, a different number of intraprediction modes can be used to perform the intraprediction. For example, in the case of the 4:2:0 format, 67 intraprediction modes can be used for the luminance component to perform intraprediction and 35 intraprediction modes can be used for the chrominance component. In the case of the 4:4:4 format, 67 intraprediction modes can be used for both the luminance component and the chrominance component to perform intraprediction. [0253] Alternatively, depending on the size and/or shape of the block, a different number of intraprediction modes can be used to perform the intraprediction. That is, depending on the size and/or shape of the PU or CU, 35 intraprediction modes or 67 intraprediction modes can be used to perform intraprediction.
For example, when the CU or PU is smaller than 64x64 or is asymmetrically partitioned, 35 intraprediction modes can be used to perform intraprediction. When the size of the CU or PU is equal to or greater than 64x64, 67 intraprediction modes can be used to perform the intraprediction. 65 directional intraprediction modes can be allowed for Intra_2Nx2N, and only 35 directional intraprediction modes can be allowed for Intra_NxN. [0255] The size of a block to which the extended intraprediction mode applies can be set differently for each sequence, image, or slice. For example, the extended intraprediction mode can be set to apply to a block (for example, a CU or PU) that is larger than 64x64 in a first slice, while the extended intraprediction mode is set to apply to a block that is larger than 32x32 in a second slice. The information representing the size of a block to which the extended intraprediction mode applies can be signaled in units of a sequence, an image, or a slice. For example, the information indicating the size of the block to which the extended intraprediction mode applies can be defined as 'log2_extended_intra_mode_size_minus4', obtained by taking a base-2 logarithm of the block size and then subtracting the integer 4. For example, if the value of log2_extended_intra_mode_size_minus4 is 0, it can indicate that the extended intraprediction mode can be applied to a block with a size equal to or greater than 16x16; and if the value of log2_extended_intra_mode_size_minus4 is 1, it can indicate that the extended intraprediction mode can be applied to a block with a size equal to or greater than 32x32. [0257] As described above, the number of intraprediction modes can be determined by considering at least one of a color component, a chrominance format, and a size or shape of a block.
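The signaled value described above maps to a minimum block size as follows, reproducing the stated relationship (value 0 → 16x16 and up, value 1 → 32x32 and up); the function names are illustrative, not part of the signaled syntax.

```python
def extended_mode_min_size(log2_extended_intra_mode_size_minus4):
    """Minimum block width/height to which the extended intraprediction mode applies."""
    # Invert "log2 of the block size minus 4": size = 2 ** (value + 4).
    return 1 << (log2_extended_intra_mode_size_minus4 + 4)

def uses_extended_modes(block_size, log2_extended_intra_mode_size_minus4):
    """True if a square block of the given size may use the extended intra modes."""
    return block_size >= extended_mode_min_size(log2_extended_intra_mode_size_minus4)
```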
Furthermore, the number of intraprediction mode candidates (for example, the number of MPMs) used to determine an intraprediction mode of a current block to encode/decode can also be determined according to at least one of a color component, a color format, and the size or shape of a block. A method for determining an intraprediction mode of a current block to encode/decode, and a method for performing intraprediction using the determined intraprediction mode, will be described with the drawings. [0259] Figure 7 is a flow chart briefly illustrating an intraprediction method according to an embodiment of the present invention. [0261] With reference to Figure 7, an intraprediction mode of the current block can be determined in step S700. [0262] Specifically, the intraprediction mode of the current block can be derived based on a candidate list and an index. Here, the candidate list contains multiple candidates, and the multiple candidates can be determined based on an intraprediction mode of a neighboring block adjacent to the current block. The neighboring block can include at least one of the blocks located at the top, bottom, left, right, and corner of the current block. The index can specify one of the multiple candidates in the candidate list, and the candidate specified by the index can be set as the intraprediction mode of the current block. [0264] An intraprediction mode used for intraprediction in the neighboring block can be set as a candidate. In addition, an intraprediction mode having a directionality similar to that of the neighboring block's intraprediction mode can be set as a candidate. Here, the intraprediction mode having similar directionality can be determined by adding or subtracting a predetermined constant value to or from the intraprediction mode of the neighboring block. The predetermined constant value can be an integer, such as one, two, or more. [0266] The candidate list may further include a default mode.
The default mode may include at least one of a planar mode, a DC mode, a vertical mode, and a horizontal mode. The default mode can be adaptively added considering the maximum number of candidates that can be included in the candidate list of the current block. [0268] The maximum number of candidates that can be included in the candidate list can be three, four, five, six, or more. The maximum number of candidates that can be included in the candidate list can be a value preset in the device for encoding/decoding a video, or it can be variably determined based on a characteristic of the current block. The characteristic can mean the location/size/shape of the block, the number/type of intraprediction modes that the block can use, a color type, a color format, etc. Alternatively, the information indicating the maximum number of candidates that can be included in the candidate list can be signaled separately, and the maximum number of candidates that can be included in the candidate list can be variably determined using that information. The information indicating the maximum number of candidates can be signaled in at least one of a sequence level, an image level, a slice level, and a block level. [0269] When the extended intraprediction modes and the 35 predefined intraprediction modes are selectively used, the intraprediction modes of neighboring blocks can be transformed into indices corresponding to the extended intraprediction modes, or into indices corresponding to the 35 intraprediction modes, whereby candidates can be derived. For the transformation to an index, a predefined table can be used, or a scaling operation based on a predetermined value can be used. Here, the predefined table can define a mapping relationship between different groups of intraprediction modes (for example, the extended intraprediction modes and the 35 intraprediction modes).
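The candidate-list construction outlined above can be sketched like this: modes of neighboring blocks become candidates, directional neighbor modes contribute modes offset by the constant ±1, and default modes (planar, DC, vertical, horizontal) fill the remaining slots. The exact ordering and the choice of defaults here are illustrative assumptions, not signaled behavior.

```python
PLANAR, DC, HORIZONTAL, VERTICAL = 0, 1, 10, 26  # 35-mode numbering

def build_candidate_list(neighbor_modes, max_candidates=6):
    """Build an MPM-style candidate list from neighboring blocks' intra modes."""
    candidates = []

    def add(mode):
        # Keep modes valid, unique, and within the allowed list size.
        if 0 <= mode <= 34 and mode not in candidates and len(candidates) < max_candidates:
            candidates.append(mode)

    for mode in neighbor_modes:              # modes used by neighboring blocks
        add(mode)
    for mode in neighbor_modes:              # similar directionality: mode +/- 1
        if mode >= 2:                        # only for directional modes
            add(mode - 1)
            add(mode + 1)
    for mode in (PLANAR, DC, VERTICAL, HORIZONTAL):  # default modes fill the rest
        add(mode)
    return candidates
```

With left and top neighbors in horizontal (10) and vertical (26) mode, this yields [10, 26, 9, 11, 25, 27] for a six-entry list.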
[0271] For example, when the left neighboring block uses the 35 intraprediction modes and the intraprediction mode of the left neighboring block is 10 (a horizontal mode), it can be transformed into an index of 16 corresponding to a horizontal mode among the extended intraprediction modes. [0273] Alternatively, when the upper neighboring block uses the extended intraprediction modes and the intraprediction mode of the upper neighboring block has an index of 50 (a vertical mode), it can be transformed into an index of 26 corresponding to a vertical mode among the 35 intraprediction modes. [0275] According to the method described above for determining the intraprediction mode, the intraprediction mode can be derived independently for each of the luminance component and the chrominance component, or the intraprediction mode of the chrominance component can be derived depending on the intraprediction mode of the luminance component. [0277] Specifically, the intraprediction mode of the chrominance component can be determined based on the intraprediction mode of the luminance component, as shown in the following Table 1. [0279] Table 1 [0284] In Table 1, intra_chroma_pred_mode denotes signaled information specifying the intraprediction mode of the chrominance component, and IntraPredModeY indicates the intraprediction mode of the luminance component. [0286] With reference to Figure 7, a reference sample for the intraprediction of the current block can be derived in step S710. [0288] Specifically, a reference sample for intraprediction can be derived based on a neighboring sample of the current block. The neighboring sample can be a reconstructed sample of the neighboring block, and the reconstructed sample can be a reconstructed sample before a loop filter is applied or a reconstructed sample after the loop filter is applied.
[0290] A neighboring sample reconstructed before the current block can be used as a reference sample, and a neighboring sample filtered based on a predetermined intra filter can be used as a reference sample. Filtering neighboring samples using an intra filter can also be called reference sample filtering. The intra filter may include at least one of a first intra filter applied to multiple neighboring samples located on the same horizontal line and a second intra filter applied to multiple neighboring samples located on the same vertical line. Depending on the positions of the neighboring samples, one of the first intra filter and the second intra filter can be selectively applied, or both intra filters can be applied. At this time, at least one filter coefficient of the first intra filter or the second intra filter may be (1, 2, 1), but is not limited thereto. [0292] Filtering can be performed adaptively based on at least one of the intraprediction mode of the current block and the size of the transform block for the current block. For example, when the intraprediction mode of the current block is the DC mode, the vertical mode, or the horizontal mode, filtering may not be performed. When the size of the transform block is NxM, filtering may not be performed. Here, N and M can be the same or different values, or they can be values of 4, 8, 16, or more. For example, if the size of the transform block is 4x4, filtering may not be performed. Alternatively, filtering can be performed selectively based on the result of a comparison between a predefined threshold and the difference between the intraprediction mode of the current block and the vertical mode (or the horizontal mode). For example, when the difference between the intraprediction mode of the current block and the vertical mode is greater than a threshold, filtering can be performed. The threshold can be defined for each transform block size, as shown in Table 2.
[0294] Table 2 [0299] The intra filter can be determined as one of multiple intra filter candidates predefined in the device for encoding/decoding a video. For this purpose, an index specifying the intra filter of the current block among the multiple intra filter candidates can be signaled. Alternatively, the intra filter can be determined based on at least one of the size/shape of the current block, the size/shape of the transform block, information on the filter strength, and variations of the neighboring samples. [0301] Referring to Figure 7, intraprediction can be performed using the intraprediction mode of the current block and the reference sample in step S720. [0303] That is, the prediction sample of the current block can be obtained using the intraprediction mode determined in step S700 and the reference sample derived in step S710. However, in the case of intraprediction, a boundary sample of the neighboring block may be used, and therefore the quality of the prediction image may decrease. Therefore, a correction process can be performed on the prediction sample generated through the prediction process described above, and it will be described in detail with reference to Figures 8 to 10. However, the correction process is not limited to being applied only to the intraprediction sample, and it may be applied to an interprediction sample or the reconstructed sample. [0305] Figure 8 is a drawing illustrating a method for correcting a prediction sample of a current block based on differential information of neighboring samples in accordance with an embodiment of the present invention. [0307] The prediction sample of the current block can be corrected based on the differential information of multiple neighboring samples for the current block. The correction can be performed on all prediction samples in the current block, or it can be performed on prediction samples in predetermined partial regions.
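The (1, 2, 1) reference-sample smoothing and the threshold test against the vertical/horizontal mode can be sketched as below. The per-size thresholds are placeholders, since the contents of Table 2 are not reproduced here, and the exemption of DC, vertical, horizontal, and 4x4 blocks follows the text above.

```python
PLANAR, DC, HORIZONTAL, VERTICAL = 0, 1, 10, 26   # 35-mode numbering

def should_filter_reference(intra_mode, transform_size, thresholds):
    """Decide whether to smooth reference samples for a given intra mode.

    `thresholds` maps transform size -> minimum distance from the
    vertical/horizontal mode (a placeholder standing in for Table 2).
    """
    if intra_mode in (DC, HORIZONTAL, VERTICAL) or transform_size == 4:
        return False
    dist = min(abs(intra_mode - HORIZONTAL), abs(intra_mode - VERTICAL))
    return dist > thresholds.get(transform_size, 0)

def smooth_reference(samples):
    """Apply the (1, 2, 1) / 4 smoothing filter; end samples are kept unchanged."""
    out = list(samples)
    for i in range(1, len(samples) - 1):
        out[i] = (samples[i - 1] + 2 * samples[i] + samples[i + 1] + 2) >> 2
    return out
```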
The partial regions can be one row/column or multiple rows/columns, and these can be preset regions for correction in the device for encoding/decoding a video. For example, the correction can be performed on a row/column located at a boundary of the current block, or it can be performed on a plurality of rows/columns from a boundary of the current block. Alternatively, the partial regions can be variably determined based on at least one of the size/shape of the current block and the intraprediction mode. [0309] The neighboring samples can belong to neighboring blocks located at the top, the left, and the top left corner of the current block. The number of neighboring samples used for correction can be two, three, four, or more. The positions of the neighboring samples can be variably determined depending on the position of the prediction sample that is the correction target in the current block. Alternatively, some of the neighboring samples may have fixed positions regardless of the position of the prediction sample that is the correction target, and the remaining neighboring samples may have variable positions depending on the position of the prediction sample that is the correction target. [0311] The differential information of the neighboring samples can mean a differential sample between the neighboring samples, or it can mean a value obtained by scaling the differential sample by a predetermined constant value (for example, one, two, three, etc.). Here, the predetermined constant value can be determined by considering the position of the prediction sample that is the correction target, the position of the column or row that includes the prediction sample that is the correction target, the position of the prediction sample within the column or row, etc.
[0313] For example, when the intraprediction mode of the current block is the vertical mode, the differential samples between the upper left neighboring sample p(-1, -1) and the neighboring samples p(-1, y) adjacent to the left boundary of the current block can be used to obtain the final prediction sample, as shown in Equation 1. [0315] Equation 1 [0317] P'(0, y) = P(0, y) + ((p(-1, y) - p(-1, -1)) >> 1) for y = 0 ... N-1 [0319] For example, when the intraprediction mode of the current block is the horizontal mode, the differential samples between the upper left neighboring sample p(-1, -1) and the neighboring samples p(x, -1) adjacent to the upper boundary of the current block can be used to obtain the final prediction sample, as shown in Equation 2. [0321] Equation 2 [0323] P'(x, 0) = P(x, 0) + ((p(x, -1) - p(-1, -1)) >> 1) for x = 0 ... N-1 [0325] For example, when the intraprediction mode of the current block is the vertical mode, the differential samples between the upper left neighboring sample p(-1, -1) and the neighboring samples p(-1, y) adjacent to the left boundary of the current block can be used to obtain the final prediction sample. Here, the differential sample can be added to the prediction sample, or the differential sample can be scaled by a predetermined constant value and then added to the prediction sample. The predetermined constant value used in the scaling can be determined differently depending on the column and/or the row. For example, the prediction sample can be corrected as shown in Equation 3 and Equation 4. [0327] Equation 3 [0329] P'(0, y) = P(0, y) + ((p(-1, y) - p(-1, -1)) >> 1) for y = 0 ... N-1 [0330] Equation 4 [0331] P'(1, y) = P(1, y) + ((p(-1, y) - p(-1, -1)) >> 2) for y = 0 ... N-1 [0333] For example, when the intraprediction mode of the current block is the horizontal mode, the differential samples between the upper left neighboring sample p(-1, -1) and the neighboring samples p(x, -1) adjacent to the upper boundary of the current block can be used to obtain the final prediction sample, as described for the case of the vertical mode. For example, the prediction sample can be corrected as shown in Equation 5 and Equation 6. [0335] Equation 5 [0337] P'(x, 0) = P(x, 0) + ((p(x, -1) - p(-1, -1)) >> 1) for x = 0 ... N-1 [0339] Equation 6 [0341] P'(x, 1) = P(x, 1) + ((p(x, -1) - p(-1, -1)) >> 2) for x = 0 ... N-1 [0343] Figures 9 and 10 are drawings illustrating a method of correcting a prediction sample based on a predetermined correction filter in accordance with an embodiment of the present invention. [0345] The prediction sample can be corrected based on the neighboring sample of the prediction sample that is the correction target and a predetermined correction filter. Here, the neighboring sample can be specified by an angular line of the directional prediction mode of the current block, or it can be at least one sample placed on the same angular line as the prediction sample that is the correction target. Furthermore, the neighboring sample can be a prediction sample in the current block, or it can be a reconstructed sample in a neighboring block reconstructed before the current block. [0347] At least one of the number of taps, the strength, and the filter coefficients of the correction filter can be determined based on at least one of the position of the prediction sample that is the correction target, whether or not the prediction sample that is the correction target is placed on the boundary of the current block, the intraprediction mode of the current block, the angle of the directional prediction mode, the prediction mode (inter or intra mode) of the neighboring block, and the size/shape of the current block.
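Equations 1 through 6 can be transcribed directly into code as below, where N is the block size, pred is the N x N prediction block indexed as pred[y][x], left[y] stands for p(-1, y), top[x] for p(x, -1), and corner for p(-1, -1). The function names are illustrative.

```python
def correct_vertical(pred, left, corner):
    """Equations 3/4: correct the first two columns for vertical-mode prediction."""
    n = len(pred)
    out = [row[:] for row in pred]
    for y in range(n):
        diff = left[y] - corner              # p(-1, y) - p(-1, -1)
        out[y][0] = pred[y][0] + (diff >> 1)     # Equation 3 (Equation 1 is the same)
        if n > 1:
            out[y][1] = pred[y][1] + (diff >> 2) # Equation 4
    return out

def correct_horizontal(pred, top, corner):
    """Equations 5/6: correct the first two rows for horizontal-mode prediction."""
    n = len(pred)
    out = [row[:] for row in pred]
    for x in range(n):
        diff = top[x] - corner               # p(x, -1) - p(-1, -1)
        out[0][x] = pred[0][x] + (diff >> 1)     # Equation 5 (Equation 2 is the same)
        if n > 1:
            out[1][x] = pred[1][x] + (diff >> 2) # Equation 6
    return out
```

Note that Python's `>>` is an arithmetic shift, so negative differential samples are halved toward negative infinity, matching the shift-based scaling in the equations.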
[0349] Referring to Figure 9, when the directional prediction mode has an index of 2 or 34, at least one prediction/reconstructed sample placed at the lower left of the prediction sample that is the correction target and the predetermined correction filter can be used to obtain the final prediction sample. Here, the prediction/reconstructed sample at the lower left may belong to a line before the line that includes the prediction sample that is the correction target. The prediction/reconstructed sample at the lower left can belong to the same block as the current sample, or to a neighboring block adjacent to the current block.

[0351] Filtering for the prediction sample can be performed only on the line at the boundary of the block, or it can be performed on multiple lines. A correction filter in which at least one of the number of filter taps and a filter coefficient is different for each of the lines can be used. For example, a (1/2, 1/2) filter can be used for the first left line closest to the block boundary, a (12/16, 4/16) filter can be used for the second line, a (14/16, 2/16) filter can be used for the third line, and a (15/16, 1/16) filter can be used for the fourth line.

[0353] Alternatively, when the directional prediction mode has an index of 3 to 6 or 30 to 33, filtering can be performed at the block boundary as shown in Figure 10, and a 3-tap correction filter can be used to correct the prediction sample. Filtering can be performed using the lower left sample of the prediction sample that is the correction target, the sample below the lower left sample, and a 3-tap correction filter that takes as input the prediction sample that is the correction target. The position of the neighboring sample used by the correction filter can be determined differently depending on the directional prediction mode. The filter coefficient of the correction filter can be determined differently depending on the directional prediction mode.
[0355] Different correction filters can be applied depending on whether the neighboring block is encoded in the inter mode or the intra mode. When the neighboring block is encoded in the intra mode, a filtering method that gives more weight to the prediction sample can be used, compared to when the neighboring block is encoded in the inter mode. For example, in the case where the intraprediction mode is 34, when the neighboring block is encoded in the inter mode, a (1/2, 1/2) filter can be used, and when the neighboring block is encoded in the intra mode, a (4/16, 12/16) filter can be used.

[0357] The number of lines to be filtered in the current block may vary depending on the size/shape of the current block (for example, the coding block or the prediction block). For example, when the current block size is 32x32 or less, filtering can be performed on only a single line at the block boundary; otherwise, filtering can be performed on multiple lines, including the line at the block boundary.

[0359] Figures 9 and 10 are based on the case where the 35 intraprediction modes of Figure 4 are used, but can be applied in the same or a similar way to the case where the extended intraprediction modes are used.

[0361] Figure 11 shows a range of reference samples for intraprediction in accordance with an embodiment to which the present invention is applied.

[0363] Referring to Figure 11, intraprediction can be performed using the reference samples P(-1, -1), P(-1, y) (0 <= y <= 2N-1) and P(x, -1) (0 <= x <= 2N-1) located on a boundary of a current block. At this time, filtering on the reference samples is performed selectively based on at least one of an intraprediction mode (e.g., the index, directionality, angle, etc. of the intraprediction mode) of the current block or the size of a transform block related to the current block.

[0365] Filtering on the reference samples can be performed using an intra filter predefined in an encoder and a decoder.
For example, an intra filter with a filter coefficient of (1,2,1) or an intra filter with a filter coefficient of (2,3,6,3,2) can be used to obtain final reference samples for use in intraprediction.

[0367] Alternatively, at least one of a plurality of intra filter candidates may be selected to perform filtering on the reference samples. In this case, the plurality of intra filter candidates may differ from each other in at least one of a filter strength, a filter coefficient, or a tap number (for example, a number of filter coefficients, a filter length). A plurality of intra filter candidates can be defined at at least one of a sequence, an image, a slice, or a block level. That is, a sequence, an image, a slice, or a block in which the current block is included can use the same plurality of intra filter candidates.

[0369] Hereinafter, for convenience of explanation, it is assumed that a plurality of intra filter candidates includes a first intra filter and a second intra filter. It is also assumed that the first intra filter is a (1,2,1) 3-tap filter and the second intra filter is a (2,3,6,3,2) 5-tap filter.

[0371] When the reference samples are filtered by applying the first intra filter, the filtered reference samples can be derived as shown in Equation 7.

[0373] Equation 7

[0376] P(-1, y) = (P(-1, y+1) + 2P(-1, y) + P(-1, y-1) + 2) >> 2

[0377] P(x, -1) = (P(x+1, -1) + 2P(x, -1) + P(x-1, -1) + 2) >> 2

[0379] When the reference samples are filtered by applying the second intra filter, the filtered reference samples can be derived as shown in the following Equation 8.

[0381] Equation 8

[0384] P(-1, y) = (2P(-1, y+2) + 3P(-1, y+1) + 6P(-1, y) + 3P(-1, y-1) + 2P(-1, y-2) + 8) >> 4

P(x, -1) = (2P(x+2, -1) + 3P(x+1, -1) + 6P(x, -1) + 3P(x-1, -1) + 2P(x-2, -1) + 8) >> 4

[0386] In Equations 7 and 8 above, x can be an integer between 0 and 2N-2, and y can be an integer between 0 and 2N-2.
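As a rough sketch of Equations 7 and 8, the two intra filters can be applied to a one-dimensional run of reference samples as below. The list representation and the choice to leave boundary samples unfiltered are simplifying assumptions of this sketch.

```python
def filter_121(refs):
    """Apply the (1,2,1) 3-tap intra filter of Equation 7 to a sample run."""
    out = list(refs)
    for i in range(1, len(refs) - 1):
        # Weights sum to 4; '+ 2' rounds before the shift by 2.
        out[i] = (refs[i - 1] + 2 * refs[i] + refs[i + 1] + 2) >> 2
    return out

def filter_23632(refs):
    """Apply the (2,3,6,3,2) 5-tap intra filter of Equation 8 to a sample run."""
    out = list(refs)
    for i in range(2, len(refs) - 2):
        # Weights sum to 16; '+ 8' rounds before the shift by 4.
        out[i] = (2 * refs[i - 2] + 3 * refs[i - 1] + 6 * refs[i]
                  + 3 * refs[i + 1] + 2 * refs[i + 2] + 8) >> 4
    return out
```

Both filters preserve flat sample runs exactly, since their weights sum to the shift denominator.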
[0388] Alternatively, one of a plurality of intra filter candidates can be determined depending on the position of a reference sample, and filtering on the reference sample can be performed using the determined one. For example, a first intra filter can be applied to reference samples included in a first range, and a second intra filter can be applied to reference samples included in a second range. Here, the first range and the second range can be distinguished based on whether they are adjacent to a boundary of the current block, whether they are located on the top side or the left side of the current block, or whether they are adjacent to a corner of the current block. For example, as shown in Figure 12, filtering on the reference samples (P(-1, -1), P(-1, 0), P(-1, 1), ..., P(-1, N-1) and P(0, -1), P(1, -1), ...) that are adjacent to a boundary of the current block is performed by applying the first intra filter as shown in Equation 7, and filtering on the other reference samples that are not adjacent to a boundary of the current block is performed by applying the second intra filter as shown in Equation 8. It is possible to select one of a plurality of intra filter candidates based on the transformation type used for the current block, and to perform filtering on the reference samples using the selected one. Here, the transformation type can mean (1) a transformation scheme such as DCT, DST or KLT, (2) a transformation mode indicator such as a 2D transformation, a 1D transformation or no transformation, or (3) the number of transformations, such as a first transformation and a second transformation. Hereinafter, for convenience of description, the transformation type is assumed to mean the transformation scheme such as DCT, DST and KLT.

[0390] For example, if the current block is encoded using a DCT, filtering can be performed using the first intra filter, and if the current block is encoded using a DST, filtering can be performed using the second intra filter.
Or, if the current block is encoded using a DCT or a DST, filtering can be performed using the first intra filter, and if the current block is encoded using a KLT, filtering can be performed using the second intra filter.

[0392] Filtering can be performed using a filter selected based on a transformation type of the current block and the position of a reference sample. For example, if the current block is encoded using a DCT, filtering on the reference samples P(-1, -1), P(-1, 0), P(-1, 1), ..., P(-1, N-1) and P(0, -1), P(1, -1), ..., P(N-1, -1) can be performed using the first intra filter, and filtering on the other reference samples can be performed using the second intra filter. If the current block is encoded using a DST, filtering on the reference samples P(-1, -1), P(-1, 0), P(-1, 1), ..., P(-1, N-1) and P(0, -1), P(1, -1), ..., P(N-1, -1) can be performed using the second intra filter, and filtering on the other reference samples can be performed using the first intra filter.

[0394] One of a plurality of intra filter candidates can be selected based on whether a transformation type of a neighboring block that includes a reference sample is the same as a transformation type of the current block, and filtering can be performed using the selected intra filter candidate. For example, when the current block and a neighboring block use the same transformation type, filtering is performed using the first intra filter, and when the transformation types of the current block and the neighboring block are different from each other, the second intra filter can be used to perform filtering.

[0396] It is possible to select any one of a plurality of intra filter candidates based on the transformation type of a neighboring block and to perform filtering on a reference sample using the selected one. In other words, a specific filter can be selected considering the transformation type of the block in which a reference sample is included.
For example, as shown in Figure 13, if a block adjacent to the left/bottom left of the current block is a block encoded using a DCT, and a block adjacent to the top/top right of the current block is a block encoded using a DST, filtering on reference samples adjacent to the left/bottom left of the current block is performed by applying the first intra filter, and filtering on reference samples adjacent to the top/top right of the current block is performed by applying the second intra filter.

[0398] In units of a predetermined region, a filter usable in the corresponding region can be defined. Here, the unit of the predetermined region can be any one of a sequence, an image, a slice, a group of blocks (for example, a row of coding tree units) or a block (for example, a coding tree unit); alternatively, another region that shares one or more filters can be defined. A reference sample can be filtered using a filter assigned to the region in which the current block is included.

[0400] For example, as shown in Figure 14, it is possible to perform filtering on reference samples using different filters in units of CTUs. In this case, information indicating whether the same filter is used in a sequence or in an image, the type of filter used for each CTU, or an index that specifies the filter used in the corresponding CTU among the available intra filter candidates can be signaled through a Sequence Parameter Set (SPS) or a Picture Parameter Set (PPS).

[0402] The intra filter described above can be applied in units of a coding unit. For example, filtering can be performed by applying the first intra filter or the second intra filter to reference samples around a coding unit.

[0404] When an intraprediction mode of a current block is determined, the intraprediction can be performed using a reference sample adjacent to the current block.
For example, the prediction samples of a current block can be generated by averaging reference samples, or they can be generated by duplicating reference samples in a specific direction considering a directionality of an intraprediction mode. As described above in the example referring to Figure 11, P(-1, -1), P(-1, y) (0 <= y <= 2N-1) and P(x, -1) (0 <= x <= 2N-1) located at the boundary of the current block can be used as reference samples.

[0406] When it is determined that a sample included in a neighboring block adjacent to the current block is not available as a reference sample, the sample that is not available can be replaced with a reference sample that is available. For example, a neighboring sample may be determined to be unavailable in the case where the position of a sample included in a neighboring block is outside an image, a sample included in a neighboring block is present in a different slice from the current block, or a sample included in a neighboring block is included in a block encoded by an inter prediction. Here, whether a sample included in a block encoded by an inter prediction is unavailable can be determined based on information indicating whether a sample included in a block encoded by an inter prediction should be used as a reference sample when performing an intraprediction of the current block. In this case, the information can be a 1-bit flag (for example, 'restricted_intra_prediction_flag'), but it is not limited thereto. For example, when the value of 'restricted_intra_prediction_flag' is 1, it can be determined that a sample included in a block encoded by an inter prediction is not available as a reference sample. Hereinafter, a sample that cannot be used as a reference sample will be referred to as an unavailable reference sample.
[0408] In the example shown in Figure 11, when it is determined that a sample located in the lower left corner (for example, P(-1, 2N-1)) is not available, the sample located in the lower left corner can be replaced with the first available reference sample found by scanning the available samples in a predetermined order. Here, the scan can be performed sequentially from the sample adjacent to the lower left sample. For example, in the example shown in Figure 11, when the sample P(-1, 2N-1) is not available, the scan can be performed in an order from P(-1, 2N-2) to P(-1, -1), and then from P(0, -1) to P(2N-1, -1). P(-1, 2N-1) can be replaced with the first available reference sample found as a result of the scan.

[0410] When a left reference sample, other than the reference sample located in the lower left corner, is not available, the left reference sample can be replaced with the reference sample adjacent to the lower part of that left reference sample. For example, an unavailable reference sample P(-1, y) between P(-1, 2N-1) and P(-1, -1) can be replaced with the reference sample P(-1, y+1).

[0412] When an upper reference sample is not available, the upper reference sample can be replaced with the reference sample adjacent to the left of that upper reference sample. For example, an unavailable reference sample P(x, -1) between P(0, -1) and P(2N-1, -1) can be replaced with the reference sample P(x-1, -1).

[0414] A set of reference samples for a current block may be called a 'reference line' (or 'intra reference line' or 'reference sample line'). Here, the reference line can include a set of reference samples made up of one row and one column. For example, in the example shown in Figure 11, a 'reference line' is a set of reference samples that includes P(-1, 2N-1) to P(-1, -1) and P(0, -1) to P(2N-1, -1). An intraprediction of a current block can be performed based on the reference samples included in a reference line.
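The substitution order of paragraphs [0408] to [0412] can be sketched as a single padding pass over the reference line, listed from the lower left sample P(-1, 2N-1) through the corner to the upper right sample P(2N-1, -1). The use of None to mark unavailable samples and the mid-range fallback value are assumptions of this sketch.

```python
def pad_reference_line(refs):
    """Replace unavailable (None) reference samples by propagation.

    refs -- reference line ordered from the lower left sample to the
            upper right sample; None marks an unavailable sample.
    """
    out = list(refs)
    # Lower left corner: take the first available sample found by the scan.
    if out[0] is None:
        out[0] = next((s for s in out if s is not None), 128)  # 128: fallback (assumption)
    # Every remaining hole copies its already-padded predecessor, which
    # implements the p(-1, y+1) / p(x-1, -1) substitution rules above.
    for i in range(1, len(out)):
        if out[i] is None:
            out[i] = out[i - 1]
    return out
```

With this ordering a run of unavailable samples at the start of the line inherits the first sample the scan finds, and interior holes inherit their neighbor toward the lower left.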
An intraprediction of a current block can be performed using reference samples included in a reference line, based on an intraprediction mode of the current block. For example, when the intraprediction mode of the current block is the DC mode, a prediction signal can be generated using an average of the reference samples included in the reference line and a weighted prediction. For example, when the intraprediction mode of the current block is the DC mode, prediction samples of the current block can be obtained according to Equation 9.

[0416] Equation 9

[0420] P(0, y) = (P(-1, y) + 3*dcVal + 2) >> 2

[0422] In Equation 9, dcVal can be generated based on an average value of the samples, except P(-1, -1), among the reference samples included in a reference line.

[0424] The planar mode provides effective prediction efficiency in a smooth area without strong edges, and is effective in reducing the discontinuity at a block boundary or the deterioration of image quality at a block boundary. When the intraprediction mode of a current block is the planar mode, a provisional prediction sample in the horizontal direction of the current block can be obtained using a reference sample adjacent to the upper right corner of the current block and a reference sample having the same y coordinate as the provisional prediction sample in the horizontal direction, and a provisional prediction sample in the vertical direction of the current block can be obtained using a reference sample adjacent to the lower left corner of the current block and a reference sample having the same x coordinate as the provisional prediction sample in the vertical direction. For example, the provisional prediction sample in the horizontal direction and the provisional prediction sample in the vertical direction of a current block can be obtained according to Equation 10.
[0425] Equation 10

[0427] Ph(x, y) = (N - 1 - x) * P(-1, y) + (x + 1) * P(N, -1)

[0428] Pv(x, y) = (N - 1 - y) * P(x, -1) + (y + 1) * P(-1, N)

[0430] A prediction sample of the current block can be generated by adding the provisional prediction sample in the horizontal direction and the provisional prediction sample in the vertical direction, and then shifting the result of the sum by a certain value determined according to the size of the current block. For example, a prediction sample of the current block can be obtained according to Equation 11.

[0432] Equation 11

[0434] P(x, y) = (Ph(x, y) + Pv(x, y) + N) >> (log2(N) + 1)

[0436] An intraprediction of a current block can be performed using a plurality of reference lines. Assuming that a current block has a size WxH, the k-th reference line can include p(-k, -k), reference samples located in the same row as p(-k, -k) (for example, the reference samples from p(-k+1, -k) to p(W+H+2(k-1), -k), or the reference samples from p(-k+1, -k) to p(2W+2(k-1), -k)), and reference samples located in the same column as p(-k, -k) (for example, the reference samples from p(-k, -k+1) to p(-k, W+H+2(k-1)), or the reference samples from p(-k, -k+1) to p(-k, 2H+2(k-1))).

[0438] Figure 15 exemplifies a plurality of reference sample lines. As in the example shown in Figure 15, when the first reference line adjacent to the boundary of a current block is called 'reference line 0', the k-th reference line can be set adjacent to the (k-1)-th reference line.

[0440] Alternatively, unlike the example shown in Figure 15, it is also possible to configure all the reference lines to have the same number of reference samples.

[0442] An intraprediction of a current block can be performed using at least one of a plurality of reference lines. The method of performing an intraprediction using a plurality of reference lines as described above may be referred to as an 'intraprediction method using an extended reference sample' or an 'extended intraprediction method'.
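Returning to the planar mode, Equations 10 and 11 combine as in the sketch below. The function signature and the restriction to power-of-two N are assumptions of this sketch.

```python
def planar_predict(top, left, top_right, bottom_left):
    """Planar intraprediction per Equations 10 and 11.

    top, left   -- the N reference samples P(x, -1) and P(-1, y)
    top_right   -- P(N, -1); bottom_left -- P(-1, N)
    """
    n = len(top)
    shift = n.bit_length()          # log2(N) + 1 for power-of-two N
    pred = [[0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            ph = (n - 1 - x) * left[y] + (x + 1) * top_right    # Equation 10, horizontal
            pv = (n - 1 - y) * top[x] + (y + 1) * bottom_left   # Equation 10, vertical
            pred[y][x] = (ph + pv + n) >> shift                 # Equation 11
    return pred
```

The horizontal and vertical provisional samples each carry weight N, so the final shift by log2(N) + 1 normalizes their sum; flat reference samples reproduce a flat block exactly.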
Furthermore, a plurality of reference lines can be referred to as an 'extended reference line'.

[0444] Whether or not to perform intraprediction using an extended reference line can be determined based on information signaled through a bit stream. Here, the information can be a 1-bit flag, but is not limited thereto. Information on whether to perform intraprediction using an extended reference line can be signaled in units of a coding tree unit, a coding unit, or a prediction unit, or it can be signaled in units of a sequence, an image, or a slice. That is, whether to perform intraprediction using the extended reference line can be determined in units of a sequence, an image, a slice, a CTU, a CU, or a PU.

[0446] Alternatively, whether or not to perform intraprediction using an extended reference line can be determined based on at least one of the size, shape, depth, or intraprediction mode of the current block.

[0448] When it is determined to perform intraprediction using an extended reference line, the number of reference lines can be determined. Here, the number of reference lines can have a fixed value, or it can be adaptively determined based on the size, shape, or intraprediction mode of the current block. For example, when the intraprediction mode of the current block is a non-directional mode, the intraprediction of the current block is performed using one reference line. When the intraprediction mode of the current block is a directional mode, the intraprediction of the current block can be performed using a plurality of reference lines.

[0450] For a further example, the number of reference lines can be determined by information that is signaled in units of a sequence, an image, a slice, or a unit to be decoded. Here, the unit to be decoded may represent a coding tree unit, a coding unit, a transform unit, a prediction unit, or the like.
For example, a syntax element 'max_intra_line_idx_minus2' indicating the number of reference lines available in a sequence or slice can be signaled through a sequence header or a slice header. In this case, the number of available reference lines can be set to max_intra_line_idx_minus2 + 2.

[0452] Next, a method for performing intraprediction using an extended reference line will be described in detail.

[0454] Figure 16 is a flow chart illustrating a method for performing intraprediction using an extended reference line in accordance with the present invention.

[0456] First, a decoder can generate a plurality of reference lines (S1610). The reference samples included in each reference line can be generated based on reconstructed samples included in blocks decoded before the current block.

[0458] When the intraprediction mode of the current block is a directional mode, the decoder can generate the reference lines considering the directionality of the intraprediction mode. Considering the directionality of the intraprediction mode, a greater number of reference samples can be included in the k-th reference line than in the (k-1)-th reference line. That is, a reference line far from the current block may include a greater number of reference samples than a reference line near the current block.

[0460] Here, the number of reference samples additionally included in the k-th reference line compared with the (k-1)-th reference line can be variably determined according to the size, shape, or intraprediction mode of the current block.

[0462] For example, when the current block has a size of 4x4, the k-th reference line may include four more reference samples (specifically, 2 in the horizontal direction and 2 in the vertical direction) than the (k-1)-th reference line. Also, when the current block has a size of 8x8, the k-th reference line may include eight more reference samples (specifically, 4 in the horizontal direction and 4 in the vertical direction) than the (k-1)-th reference line.
[0463] With reference to Figure 15, since the size of the current block is 4x4, it is exemplified that the first reference line includes a total of 9 reference samples and the second reference line includes a total of 13 (= 9 + 2x2) reference samples.

[0465] When the current block is not square, the number of reference samples included in a reference line can be determined according to the horizontal and vertical lengths of the current block.

[0467] For example, Figure 17 is a diagram that exemplifies a plurality of reference lines for a non-square block. Comparing Figures 15 and 17, as the width of the current block decreases to 1/2, the number of upper reference samples, except the upper left reference sample, included in reference line 0 is reduced from 8 to 4.

[0469] That is, according to Figures 15 and 17, when a current block is assumed to have a size WxH, the k-th reference line can include a total of 2{(W+H) + 2(k-1)} + 1 reference samples, that is, W+H+2(k-1) upper reference samples (or 2W+2(k-1) upper reference samples), i.e., reference samples in the horizontal direction, W+H+2(k-1) left reference samples (or 2H+2(k-1) left reference samples), i.e., reference samples in the vertical direction, and the upper left reference sample.

[0471] If an unavailable reference sample is included in a reference line, the unavailable reference sample can be replaced with a nearby available reference sample. At this time, the neighboring sample that replaces the unavailable reference sample can be included in the same reference line as the unavailable reference sample, or it can be included in a different reference line from the unavailable reference sample.
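The per-line sample count of paragraph [0469] can be written down directly. The helper name and the 1-based line index are illustrative choices for this sketch.

```python
def ref_line_sample_count(w, h, k):
    """Total reference samples in the k-th line of a WxH block, per [0469]."""
    top = w + h + 2 * (k - 1)     # upper reference samples (horizontal direction)
    left = w + h + 2 * (k - 1)    # left reference samples (vertical direction)
    return top + left + 1         # plus the upper left reference sample
```

Each further line adds two samples in each direction, so consecutive lines differ by four samples, matching the 4x4 example of paragraph [0462].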
[0473] For example, if the position of a reference sample is outside an image or in a different slice from the current block when performing intraprediction using an extended reference line, or if a reference sample is included in a block encoded by interprediction when performing intraprediction using an extended reference line, the reference sample can be determined as not available. A reference sample included in a block encoded by interprediction can be determined to be unavailable when it is set that a reference sample included in a block encoded by interprediction is not to be used (for example, only when the value of intra_prediction_constraint_flag is 0). Or, if it is specified that a block encoded by intraprediction must be decoded before a block encoded by interprediction, the block encoded by interprediction may not yet be reconstructed when the block encoded by intraprediction is decoded. Consequently, a reference sample included in the block encoded by interprediction may be determined to be unavailable.

[0475] The reference sample used to replace an unavailable reference sample can be determined by considering the position of the unavailable reference sample, the distance between the unavailable reference sample and an available reference sample, or the like. For example, an unavailable sample can be replaced with the available sample that has the shortest distance to the unavailable reference sample. That is, the distance (first offset) between the unavailable sample and an available reference sample included in the same reference line as the unavailable reference sample can be compared with the distance (second offset) between the unavailable sample and an available reference sample included in a different reference line from the unavailable reference sample, and the available reference sample having the shorter distance can be substituted for the unavailable reference sample.
[0477] In the example shown in Figure 18, the distance between the unavailable reference sample included in reference line 0 and the available reference sample included in reference line 0 is 4, and the distance between the unavailable reference sample included in reference line 0 and the available reference sample included in reference line 2 is 2. Therefore, the unavailable sample included in reference line 0 can be replaced using the available reference sample included in reference line 2.

[0479] If the first offset and the second offset are equal, the unavailable reference sample can be replaced using the available reference sample included in the same reference line as the unavailable reference sample.

[0481] An unavailable reference sample can be replaced using an available reference sample included in a different reference line from the unavailable reference sample only when the distance (i.e., the first offset) between the unavailable reference sample and an available sample included in the same reference line as the unavailable reference sample is equal to or greater than N. Alternatively, even when the first offset is equal to or greater than N, an available reference sample included in a different reference line from the unavailable reference sample may be used to replace the unavailable reference sample only when the second offset is smaller than the first offset. Here, N can represent an integer of 1 or more.

[0483] If the first offset is not equal to or greater than N, the unavailable reference sample can be replaced using an available reference sample included in the same reference line as the unavailable reference sample.

[0485] Figures 19 and 20 show an example where an unavailable reference sample is replaced with an available reference sample when N is 2.
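The decision rule of paragraphs [0479] to [0483] can be condensed into a small selector. The function name and argument layout are illustrative; the tie-breaking in favor of the same reference line follows paragraph [0479].

```python
def pick_replacement(first_offset, second_offset, same_line_sample,
                     other_line_sample, n=2):
    """Choose the replacement for an unavailable reference sample.

    first_offset  -- distance to the nearest available sample on the same line
    second_offset -- distance to the nearest available sample on another line
    n             -- threshold below which the same line is always used
    """
    # The other line is used only when the same-line sample is at least n
    # away AND the other line's sample is strictly closer.
    if first_offset >= n and second_offset < first_offset:
        return other_line_sample
    return same_line_sample
```

With n = 2 this reproduces the three illustrated cases: Figure 18 (offsets 4 vs. 2) takes the other line, Figure 19 (2 vs. 1) takes the other line, and Figure 20 (offset 1 on the same line) stays on the same line.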
If the distance between an unavailable reference sample included in reference line 0 and an available reference sample included in reference line 0 is 2, as in the example shown in Figure 19, the unavailable reference sample included in reference line 0 can be replaced using an available reference sample included in reference line 1.

[0487] On the other hand, if the distance between an unavailable reference sample included in reference line 0 and an available reference sample included in reference line 0 is 1, as in the example shown in Figure 20, the unavailable reference sample included in reference line 0 can be replaced using the available reference sample included in reference line 0.

[0489] An unavailable reference sample can be replaced using an available reference sample included in the same reference line as the unavailable reference sample, or an available reference sample included in a reference line adjacent to the reference line that includes the unavailable reference sample. Here, a reference line adjacent to the reference line that includes the unavailable reference sample may refer to a reference line that has an index difference of 1 from that reference line. Alternatively, the unavailable reference sample can be replaced with an available reference sample included in a reference line that has an index difference of two or more from the reference line including the unavailable reference sample.

[0491] Alternatively, an unavailable reference sample can be replaced using an available reference sample included in a reference line that has a larger index value or a smaller index value than the reference line that includes the unavailable reference sample. For example, if a reference line that has an index value larger than that of the reference line including the unavailable reference sample is used, a reference sample located to the left of or above the unavailable reference sample can be used to replace the unavailable reference sample.

[0493] The search for an available reference sample to replace an unavailable reference sample can be performed in a predefined direction. For example, to replace the unavailable sample, only a reference sample located in an upper, lower, left, or right direction from the unavailable sample can be used among the reference samples included in the same reference line as the unavailable reference sample. Alternatively, to replace the unavailable sample, only a reference sample located in an upper, lower, left, or right direction from the unavailable sample can be used among the reference samples included in a different reference line from the unavailable reference sample.

[0495] A decoder can decode, based on a bit stream, index information specifying one of a plurality of reference lines (S1620). For example, when there are 4 available reference lines, as in the example shown in Figure 15, the index information can specify any one of the 4 reference lines.

[0496] The reference line used to perform the intraprediction for the current block can be adaptively determined based on the size of the current block, the type of the current block, the intraprediction mode of the current block, index information of a neighboring block, a difference between the intraprediction mode of the current block and a predetermined intraprediction mode, and the like.

[0498] When any one of the plurality of reference lines is determined, the decoder can perform an intraprediction for the current block using the determined reference line (S1630).

[0500] For example, when the intraprediction mode of the current block is the DC mode, a prediction sample of the current block can be generated based on an average value (dcVal) of the reference samples included in the determined reference line. Referring to Figures 21 and 22, the calculation of the average value of the reference samples included in a reference line will be described in detail.
[0502] Alternatively, when the intra-prediction mode of the current block is a directional mode, a prediction sample of the current block may be generated based on a reference sample specified by the directional mode among the reference samples included in the determined reference line. At this time, if a line segment extending from the prediction sample in the direction indicated by the directional mode points between two reference samples, the prediction sample of the current block can be generated based on a weighted sum (weighted prediction) of a first reference sample and a second reference sample lying on either side of the point indicated by that line segment. [0504] When the intra-prediction mode of the current block is the DC mode, an average value (dcVal) of the reference samples included in a reference line must be calculated to perform the prediction for the current block. At this time, the average value for the k-th reference line can be calculated using only a part of the reference samples included in the k-th reference line. The number of reference samples used to derive the average value may be the same for every reference line, or it may differ from one reference line to another. [0506] Alternatively, the average value for the k-th reference line can be derived using all of the reference samples included in the k-th reference line. Whether to derive the average value from a part of the reference samples of the k-th reference line or from all of them can be determined based on the size of the current block, the shape of the current block, or the position of the reference line. [0508] Figure 21 is a diagram illustrating the reference samples used to derive an average value from a reference line.
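The weighted sum of the two reference samples straddling the projected point can be illustrated with 1/32-sample interpolation weights in the style of HEVC angular prediction; that particular precision is an assumption for illustration and is not mandated by the text above.

```python
def angular_predict(ref, pos_32):
    """One prediction sample from a 1-D reference array: the line segment
    extending in the prediction direction hits position pos_32 (in assumed
    1/32-sample units), so interpolate between the first and second
    reference samples on either side of that point."""
    idx, frac = pos_32 >> 5, pos_32 & 31
    if frac == 0:
        return ref[idx]  # the projection hits a reference sample exactly
    # weighted sum of the two straddling reference samples, with rounding
    return ((32 - frac) * ref[idx] + frac * ref[idx + 1] + 16) >> 5
```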
[0510] Figure 21 shows an example of deriving the reference sample average value of the k-th reference line using only a part of the reference samples included in that reference line. As illustrated in Figure 21, the reference sample average value of the first reference line adjacent to the current block (i.e., reference line 0 in Figure 21) can be calculated using the top reference samples and the left reference samples, excluding the reference sample adjacent to the top-left corner of the current block. That is, when the current block has size NxN, a total of 4N reference samples (2N top reference samples and 2N left reference samples) can be used to calculate the average value of the first reference line. [0512] The number of reference samples used to calculate the reference sample average value of the k-th reference line can be equal to the number of reference samples used to calculate the average value of the first reference line. In that case, the position of each reference sample used to calculate the average value of the k-th reference line corresponds to the position of a reference sample used to calculate the average value of the first reference line. [0514] A reference sample on the k-th reference line corresponding to a reference sample on the first reference line may have the same x coordinate or the same y coordinate as that reference sample. For example, the coordinate of a top reference sample on the k-th reference line corresponding to a top reference sample P(i, j) on the first reference line can be P(i, j-k+1), which has the same x coordinate as P(i, j). Likewise, the coordinate of a left reference sample on the k-th reference line corresponding to a left reference sample P(i, j) on the first reference line can be P(i-k+1, j), which has the same y coordinate as P(i, j). [0516] In Fig.
21, the reference samples of the second to fourth reference lines corresponding to the top reference samples and the left reference samples of the first reference line are shown. The reference sample average value of each reference line can be calculated using the reference samples shown in Figure 21. [0518] In Fig. 21 the current block is assumed to have a square shape, but the above embodiment can be applied as-is even when the current block has a non-square shape. For example, when the current block is a non-square block of size WxH, the reference sample average value of each reference line can be calculated using a total of 2(W+H) reference samples, namely 2W top reference samples and 2H left reference samples. Consequently, as in the example shown in Figure 22, the number of reference samples used to calculate the average value of the k-th reference line is the same as the number of reference samples used to calculate the average value of the first reference line. In addition, the position of each reference sample used to calculate the average value of the k-th reference line corresponds to the position of a reference sample used to calculate the average value of the first reference line. [0520] In Figures 21 and 22, top reference samples spanning twice the width of the current block and left reference samples spanning twice the height of the current block are used to calculate the reference sample average value of a reference line. However, the reference sample average value of a reference line can also be calculated using a smaller or larger number of reference samples than shown in Figures 21 and 22. [0521] For example, the reference sample average value of a reference line can be calculated using a number of top reference samples equal to the width of the current block and a number of left reference samples equal to the height of the current block.
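The averaging scheme of Figures 21 and 22 can be sketched as follows. The coordinate convention (block top-left sample at (0, 0), the k-th reference line at x = -k or y = -k, matching the positional correspondence described above) and the function name are illustrative assumptions.

```python
def dc_value(sample, k, W, H):
    """Average value (dcVal) over the k-th reference line (k = 1 is the
    line adjacent to the block) of a WxH block, using 2W top and 2H left
    reference samples as in Figures 21 and 22. 'sample' maps a coordinate
    (x, y) to a sample value; the mapping is an assumed convention."""
    top = [sample[(x, -k)] for x in range(2 * W)]    # 2W top samples
    left = [sample[(-k, y)] for y in range(2 * H)]   # 2H left samples
    return (sum(top) + sum(left)) // (2 * (W + H))
```

For a square NxN block this reduces to averaging 4N samples, as stated for the first reference line.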
[0523] The reference sample average value of a reference line can also be calculated by assigning different weights to the reference samples, depending on the shape of the current block and the position of each reference sample. For example, if the current block has a square shape, the average value can be calculated by assigning the same weight to the top reference samples and the left reference samples. On the other hand, when the current block has a non-square shape, the average value can be calculated by assigning a higher weight to either the top reference samples or the left reference samples. For example, if the height of the current block is greater than its width, the average value can be calculated by assigning a higher weight to the top reference samples than to the left reference samples. On the other hand, when the width of the current block is greater than its height, the average value can be calculated by assigning a higher weight to the left reference samples than to the top reference samples. [0525] For example, when the current block has size N/2xN, the average value dcVal of the k-th reference line can be calculated using the following Equation 12. [0527] Equation 12

dcVal = ( 2 · Σ_{i=0}^{N/2-1} P(i, -k) + Σ_{j=0}^{N-1} P(-k, j) ) / 2N

[0532] For example, when the current block has size NxN/2, the average value dcVal of the k-th reference line can be calculated using the following Equation 13. [0533] Equation 13

dcVal = ( 2 · Σ_{j=0}^{N/2-1} P(-k, j) + Σ_{i=0}^{N-1} P(i, -k) ) / 2N

[0537] In Equations 12 and 13, k can be set to a value between 1 and max_intra_line_idx_minus2 + 2. [0539] In the example described with reference to Fig. 16, the index information specifying one of the plurality of reference lines is decoded after the plurality of reference lines has been generated.
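One plausible reading of Equations 12 and 13 doubles the weight of the samples along the shorter side so that both sides contribute equally, with the total weight 2N as the divisor; the sketch below implements that reading. The weights and the divisor are assumptions for illustration, not normative values.

```python
def dc_weighted(top, left):
    """Weighted dcVal for a non-square block, following the pattern of
    Equations 12 and 13: the side with fewer reference samples is given
    double weight, so the total weight is 2N. 'top' holds W top samples
    and 'left' holds H left samples, with {W, H} = {N, N/2}."""
    W, H = len(top), len(left)
    N = max(W, H)
    if H > W:                       # N/2 x N block (Equation 12)
        total = 2 * sum(top) + sum(left)
    else:                           # N x N/2 block (Equation 13)
        total = sum(top) + 2 * sum(left)
    return total // (2 * N)
```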
It is also possible to decode the index information first and then generate only the one reference line specified by that index information among the plurality of reference lines. [0541] In the embodiment described with reference to Fig. 16, intra prediction for a current block is performed using the single reference line specified by the index information among the plurality of reference lines. It is also possible to perform intra prediction for a current block using two or more of the plurality of reference lines. Whether or not two or more reference lines are used to perform intra prediction for the current block can be determined based on information signaled in a bitstream, the size of the current block, the shape of the current block, the intra-prediction mode of the current block, whether the intra-prediction mode of the current block is non-directional, a difference between the intra-prediction mode of the current block and a predetermined intra-prediction mode, and the like. [0543] The two or more reference lines may be specified by a plurality of pieces of index information signaled in a bitstream. For example, when two reference lines are configured to be used, one of the two reference lines can be specified by first index information and the other can be specified by second index information. [0545] Alternatively, the two or more reference lines may be spatially contiguous. In this case, index information specifying one of the two or more reference lines can be signaled through a bitstream. When one of the two or more reference lines is selected by the index information, the remaining reference line can be selected automatically based on spatial adjacency with the selected reference line.
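The automatic selection of a second, spatially adjacent reference line from a single signalled index can be sketched as below. The tie-break at the last line is an invented rule for the sketch, not taken from the text.

```python
def select_two_lines(idx, num_lines=4):
    """Given the signalled reference line index, pick the spatially
    adjacent line as the second one. The preference for the next-farther
    line, and stepping back at the boundary, are assumed conventions."""
    other = idx + 1 if idx + 1 < num_lines else idx - 1
    return (idx, other)
```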
For example, when two reference lines are configured to be used and the index information indicates 'reference line 0', intra prediction of the current block can be performed based on reference line 0 and reference line 1, which is adjacent to reference line 0. [0547] When a plurality of reference lines is configured to be used, intra prediction of the current block can be performed based on an average value, a maximum value, a minimum value, or a weighted sum of the reference samples included in the plurality of reference lines. [0549] For example, assuming that the intra-prediction mode of the current block is a directional mode (that is, an angular mode), a prediction sample of the current block can be generated based on a first reference sample and a second reference sample, each included in a different reference line. Here, the first reference line including the first reference sample and the second reference line including the second reference sample may be positioned adjacent to each other, but are not limited thereto. Furthermore, the first reference sample and the second reference sample can be determined by the intra-prediction mode of the current block. The first reference sample and the second reference sample may be placed close to each other, but are not limited thereto. A prediction sample of the current block can be generated based on a weighted sum of the first reference sample and the second reference sample, or based on an average value, a minimum value, or a maximum value of the first reference sample and the second reference sample. [0551] Intra prediction of a current block can also be performed by carrying out a first intra prediction based on a part of the plurality of reference lines and a second intra prediction based on the remaining reference lines. Here, the intra-prediction mode used in the first intra prediction and the intra-prediction mode used in the second intra prediction may be the same or different.
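The combinations named above (average, minimum, maximum, weighted sum) can be sketched for two co-located arrays of prediction samples, one per reference line. The rounding offsets and default weights are illustrative assumptions.

```python
def combine_lines(p0, p1, mode="weighted", w0=1, w1=1):
    """Combine co-located prediction samples obtained from two reference
    lines. With equal weights the 'weighted' mode is the rounded average
    named in the text; min and max are the other combinations named."""
    if mode == "min":
        return [min(a, b) for a, b in zip(p0, p1)]
    if mode == "max":
        return [max(a, b) for a, b in zip(p0, p1)]
    t = w0 + w1                     # weighted sum with rounding
    return [(w0 * a + w1 * b + t // 2) // t for a, b in zip(p0, p1)]
```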
A prediction sample of the current block can then be generated based on a first prediction sample generated by performing the first intra prediction and a second prediction sample generated by performing the second intra prediction. [0552] Although the above embodiments have been described mainly in terms of the decoding process, the encoding process can be performed in the same order as described or in reverse order. [0554] Although the embodiments described above have been described on the basis of a series of steps or flowcharts, they do not limit the time-series order of the invention, and the steps can be performed simultaneously or in different orders as required. Furthermore, each of the components (for example, units, modules, etc.) that make up the block diagrams in the embodiments described above may be implemented by a hardware device or a software device, and a plurality of components may be combined and implemented as a single hardware or software device. The embodiments described above can be implemented in the form of program instructions that can be executed by various computer components and recorded on a computer-readable recording medium. The computer-readable recording medium can include program commands, data files, data structures, and the like, alone or in combination. Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tapes; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specifically configured to store and execute program instructions, such as ROM, RAM, flash memory, etc. The hardware device can be configured to operate as one or more software modules to perform the process according to the present invention, and vice versa. [0556] Industrial applicability [0558] The present invention can be applied to electronic devices that can encode/decode a video.
Claims (11) [1] 1. A method of decoding a video, the method comprising: determining a reference sample line for a current block from among a set of reference sample line candidates; performing intra prediction for the current block based on reference samples included in the reference sample line; and reconstructing the current block based on prediction samples resulting from the intra prediction, wherein a number of reference sample line candidates in the set of reference sample line candidates is determined based on information signaled at a sequence level, wherein, depending on the information, the set of reference sample line candidates comprises only a first reference sample line adjacent to the current block, or comprises both the first reference sample line and a second reference sample line not adjacent to the current block, wherein, when the set of reference sample line candidates comprises both the first reference sample line and the second reference sample line, the second reference sample line not adjacent to the current block is not used as the reference sample line of the current block when a predetermined intra-prediction mode is applied to the current block, the predetermined intra-prediction mode comprising a Planar mode but not a DC mode, and wherein, if an intra-prediction mode of the current block is the DC mode, a prediction sample of the current block is obtained based on a DC value obtained based on the reference samples included in the reference sample line of the current block. [2] The method of claim 1, wherein the reference sample line for the current block is determined based on index information specifying one of the reference sample line candidates.
[3] The method of claim 1, wherein the reference samples comprise upper reference samples and left reference samples, and wherein whether to use both the upper reference samples and the left reference samples to obtain the DC value, or to use only a part of the upper reference samples and the left reference samples, is determined based on a shape of the current block. [4] The method of claim 3, wherein a number of the upper reference samples used to obtain the DC value is identical to a width of the current block. [5] The method of claim 3, wherein a number of the left reference samples used to obtain the DC value is identical to a height of the current block. [6] The method of claim 1, wherein the reference samples comprise upper reference samples and left reference samples, wherein the DC value is obtained by a weighted sum of the upper reference samples and the left reference samples, and wherein a first weight applied to the upper reference samples and a second weight applied to the left reference samples are determined based on a shape of the current block. [7] 7.
A method of encoding a video, the method comprising: determining a reference sample line for a current block from among a set of reference sample line candidates; performing intra prediction for the current block based on reference samples included in the reference sample line; and obtaining residual samples by subtracting prediction samples resulting from the intra prediction from original samples, wherein information used to determine a number of reference sample line candidates in the set of reference sample line candidates is encoded at a sequence level, wherein the set of reference sample line candidates comprises only a first reference sample line adjacent to the current block, or comprises both the first reference sample line and a second reference sample line not adjacent to the current block, wherein, when the set of reference sample line candidates comprises both the first reference sample line and the second reference sample line, the second reference sample line not adjacent to the current block is not used as the reference sample line of the current block when a predetermined intra-prediction mode is applied to the current block, the predetermined intra-prediction mode comprising a Planar mode but not a DC mode, and wherein, if an intra-prediction mode of the current block is the DC mode, a prediction sample of the current block is obtained based on a DC value of the reference samples included in the reference sample line of the current block. [8] The method of claim 7, wherein index information specifying one of the reference sample line candidates is encoded in a bitstream. [9] The method of claim 7, wherein the reference samples comprise upper reference samples and left reference samples, and wherein whether to use both the upper reference samples and the left reference samples to obtain the DC value, or to use only a part of the upper reference samples and the left reference samples, is determined based on a shape of the current block.
[10] The method of claim 9, wherein the reference samples comprise upper reference samples and left reference samples, wherein the DC value is obtained by a weighted sum of the upper reference samples and the left reference samples, and wherein a first weight applied to the upper reference samples and a second weight applied to the left reference samples are determined based on a shape of the current block. [11] 11. A non-transitory computer-readable medium storing data associated with a video signal, comprising: a data stream stored on the non-transitory computer-readable medium, wherein the data stream is encoded by an encoding method comprising: determining a reference sample line for a current block from among a set of reference sample line candidates; performing intra prediction for the current block based on reference samples included in the reference sample line; and obtaining residual samples by subtracting prediction samples resulting from the intra prediction from original samples, wherein information used to determine a number of reference sample line candidates in the set of reference sample line candidates is encoded at a sequence level, wherein the set of reference sample line candidates comprises only a first reference sample line adjacent to the current block, or comprises both the first reference sample line and a second reference sample line not adjacent to the current block, wherein, when the set of reference sample line candidates comprises both the first reference sample line and the second reference sample line, the second reference sample line not adjacent to the current block is not used as the reference sample line of the current block when a predetermined intra-prediction mode is applied to the current block, the predetermined intra-prediction mode comprising a Planar mode but not a DC mode, and wherein, if an intra-prediction mode of the current block is the DC mode, a prediction sample of the current block is obtained based on a DC value of the reference samples included in the reference
sample line of the current block.
Priority: KR20160079639, filed 2016-06-24; KR20160079638, filed 2016-06-24.